Integrating external language models (LMs) into end-to-end (E2E) models remains a challenging task for domain-adaptive speech recognition. Recently, internal language model estimation (ILME)-based LM fusion has shown significant word error rate (WER) reductions over Shallow Fusion by subtracting a weighted internal LM score from an interpolation of the E2E model and external LM scores during beam search. However, the optimal LM interpolation weights vary over a wide range across test sets and have to be tuned extensively on well-matched validation sets. In this work, we perform LM fusion in the minimum WER (MWER) training of an E2E model to obviate the need for LM weight tuning during inference. Besides MWER training with Shallow Fusion (MWER-SF), we propose a novel MWER training with ILME (MWER-ILME), in which ILME-based fusion is conducted to generate the N-best hypotheses and their posteriors. An additional gradient is induced when the internal LM is engaged in the MWER-ILME loss computation. During inference, the LM weights pre-determined in MWER training enable robust LM integration on test sets from different domains. In experiments with transformer transducers trained on 30K hours of speech, MWER-ILME achieves on average 8.8% and 5.8% relative WER reductions over MWER and MWER-SF training, respectively, on 6 different test sets.
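As a point of reference, the ILME-based fusion score used during beam search can be sketched as the following log-linear combination (a minimal sketch based on the description above; the weight symbols $\lambda_{\text{LM}}$ and $\lambda_{\text{ILM}}$ are illustrative notation, not necessarily the paper's):

\[
\log p(y \mid x) \;=\; \log p_{\text{E2E}}(y \mid x) \;+\; \lambda_{\text{LM}} \log p_{\text{LM}}(y) \;-\; \lambda_{\text{ILM}} \log p_{\text{ILM}}(y),
\]

where $p_{\text{E2E}}$ is the E2E model score, $p_{\text{LM}}$ the external LM score, and $p_{\text{ILM}}$ the estimated internal LM score whose weighted log-probability is subtracted.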