TY - GEN
T1 - End-to-end Speech Recognition with Word-Based RNN Language Models
AU - Hori, Takaaki
AU - Cho, Jaejin
AU - Watanabe, Shinji
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
AB - This paper investigates the impact of word-based RNN language models (RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR). In our prior work, we proposed a multi-level LM, in which character-based and word-based RNN-LMs are combined in hybrid CTC/attention-based ASR. Although this multi-level approach achieves significant error reduction on the Wall Street Journal (WSJ) task, two different LMs must be trained and used for decoding, which increases the computational cost and memory usage. In this paper, we further propose a novel word-based RNN-LM that allows decoding with only the word-based LM: it provides look-ahead word probabilities to predict the next characters in place of the character-based LM, yielding competitive accuracy with less computation than the multi-level LM. We demonstrate the efficacy of the word-based RNN-LMs on a larger corpus, LibriSpeech, in addition to the WSJ corpus used in our prior work. Furthermore, we show that the proposed model achieves 5.1% WER on the WSJ Eval'92 test set when the vocabulary size is increased, which is the best WER reported for end-to-end ASR systems on this benchmark.
KW - End-to-end speech recognition
KW - attention decoder
KW - connectionist temporal classification
KW - decoding
KW - language modeling
UR - http://www.scopus.com/inward/record.url?scp=85063107807&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063107807&partnerID=8YFLogxK
U2 - 10.1109/SLT.2018.8639693
DO - 10.1109/SLT.2018.8639693
M3 - Conference contribution
AN - SCOPUS:85063107807
T3 - 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings
SP - 389
EP - 396
BT - 2018 IEEE Spoken Language Technology Workshop, SLT 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Spoken Language Technology Workshop, SLT 2018
Y2 - 18 December 2018 through 21 December 2018
ER -