This paper investigates the impact of word-based RNN language models (RNN-LMs) on the performance of end-to-end automatic speech recognition (ASR). In our prior work, we proposed a multi-level LM, in which character-based and word-based RNN-LMs are combined in hybrid CTC/attention-based ASR. Although this multi-level approach achieves significant error reduction on the Wall Street Journal (WSJ) task, two different LMs need to be trained and used for decoding, which increases the computational cost and memory usage. In this paper, we further propose a novel word-based RNN-LM that allows us to decode with only the word-based LM: instead of relying on the character-based LM, the word-based LM provides look-ahead word probabilities to predict the next characters, leading to competitive accuracy with less computation than the multi-level LM. We demonstrate the efficacy of the word-based RNN-LMs on a larger corpus, LibriSpeech, in addition to the WSJ corpus used in our prior work. Furthermore, we show that the proposed model achieves 5.1% WER on the WSJ Eval'92 test set when the vocabulary size is increased, which is the best WER reported for end-to-end ASR systems on this benchmark.
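To make the look-ahead mechanism concrete, the sketch below (a simplified illustration under assumed names, not the paper's implementation) shows one way to derive character-level probabilities from a word-level distribution: the probability of the next character is the word-LM mass of all vocabulary words consistent with the extended within-word prefix, normalized by the mass consistent with the current prefix. The `word_probs` dictionary stands in for the word RNN-LM's softmax output given the word history; `prefix_mass` and `lookahead_char_prob` are illustrative helpers.

```python
# Minimal sketch of look-ahead word probabilities for character-level decoding.
# Assumption: `word_probs` is the word RNN-LM distribution over the vocabulary
# given the current word history (here a toy dict instead of an RNN).

def prefix_mass(word_probs: dict, prefix: str) -> float:
    """Total word-LM probability of all vocabulary words starting with `prefix`."""
    return sum(p for w, p in word_probs.items() if w.startswith(prefix))

def lookahead_char_prob(word_probs: dict, prefix: str, char: str) -> float:
    """P(next char | within-word prefix), derived from the word-level distribution."""
    denom = prefix_mass(word_probs, prefix)
    if denom == 0.0:
        return 0.0  # prefix matches no known word (OOV handling omitted)
    return prefix_mass(word_probs, prefix + char) / denom

# Toy usage: a 4-word "vocabulary" with probabilities from the word RNN-LM.
word_probs = {"the": 0.5, "this": 0.2, "that": 0.2, "dog": 0.1}
print(lookahead_char_prob(word_probs, "th", "e"))  # 0.5 / 0.9 ≈ 0.556
print(lookahead_char_prob(word_probs, "th", "i"))  # 0.2 / 0.9 ≈ 0.222
```

In practice the prefix sums would be precomputed over a prefix tree of the vocabulary rather than scanned linearly, and out-of-vocabulary prefixes need a fallback (e.g., a character-level unknown-word model); both are omitted from this sketch.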