Language models (LMs) significantly improve the recognition accuracy of end-to-end (E2E) models on words rarely seen during training, when used in either the shallow fusion or the rescoring setup. In this work, we introduce LMs into the learning of hybrid autoregressive transducer (HAT) models within the discriminative training framework, to mitigate the gap between training and inference with respect to LM use. For the shallow fusion setup, we use LMs during both hypothesis generation and loss computation, and the LM-aware MWER-trained model achieves a 10\% relative improvement over the model trained with standard MWER on voice search test sets containing rare words. For the rescoring setup, we learn a small neural module to generate per-token fusion weights in a data-dependent manner. This model achieves the same rescoring WER as the regular MWER-trained model, but without the need to sweep fusion weights.
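To make the training objective concrete, the following is a minimal sketch of an LM-aware MWER loss, assuming the standard $n$-best MWER formulation; the notation ($\mathcal{B}$, $W$, $\lambda$) is ours rather than the paper's:
\[
\mathcal{L}_{\text{MWER}}(x, y^*) = \sum_{y \in \mathcal{B}(x)} \hat{P}(y \mid x)\,\big(W(y, y^*) - \overline{W}\big),
\qquad
s(y \mid x) = \log P_{\text{HAT}}(y \mid x) + \lambda \log P_{\text{LM}}(y),
\]
where $\mathcal{B}(x)$ is the $n$-best list decoded with the fused score $s(y \mid x)$, $\hat{P}(y \mid x)$ is obtained by renormalizing $s$ (via softmax) over $\mathcal{B}(x)$, $W(y, y^*)$ counts word errors against the reference $y^*$, and $\overline{W}$ is their average over the list. Using the fused score $s$ both to generate $\mathcal{B}(x)$ and to compute $\hat{P}$ is what makes the training LM-aware. HAT additionally permits subtracting an estimated internal LM score from $s$, which is a common motivation for choosing HAT in LM fusion.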
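For the rescoring setup, one plausible reading of the per-token fusion-weight module is the sketch below; $f_\theta$, $h_t$, and the sigmoid parameterization are illustrative assumptions, not necessarily the paper's exact design:
\[
s(y \mid x) = \sum_{t=1}^{|y|} \Big( \log P_{\text{E2E}}(y_t \mid y_{<t}, x) + \lambda_t \log P_{\text{LM}}(y_t \mid y_{<t}) \Big),
\qquad
\lambda_t = \sigma\big(f_\theta(h_t)\big),
\]
where $f_\theta$ is the small learned module and $h_t$ gathers per-token features (e.g., the E2E and LM scores at step $t$). Because $\lambda_t$ is predicted per token and per utterance, no global fusion weight needs to be swept on a development set.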