Internal Language Model Estimation (ILME) based language model (LM) fusion has been shown to significantly improve recognition results over conventional shallow fusion in both intra-domain and cross-domain speech recognition tasks. In this paper, we apply the ILME method to cross-domain code-switching speech recognition (CSSR). Specifically, our interest stems from several aspects. First, we are curious about how effective ILME-based LM fusion is for both intra-domain and cross-domain CSSR tasks; we verify this both with and without merging the two code-switching data sets. More importantly, we train an end-to-end (E2E) speech recognition model by merging two monolingual data sets and examine the efficacy of the proposed ILME-based LM fusion for CSSR. Experimental results on SEAME, a Southeast Asian code-switching corpus, and another Mainland China code-switching data set demonstrate the effectiveness of the proposed ILME-based LM fusion method.
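For readers unfamiliar with the technique, the sketch below illustrates the per-token scoring rule commonly used in ILME-based fusion, contrasted with shallow fusion. The function name, the interpolation weights, and the PyTorch framing are illustrative assumptions for exposition, not the exact configuration used in this paper.

```python
import torch


def ilme_fused_score(log_p_e2e: torch.Tensor,
                     log_p_ext_lm: torch.Tensor,
                     log_p_ilm: torch.Tensor,
                     lm_weight: float = 0.3,
                     ilm_weight: float = 0.2) -> torch.Tensor:
    """Combine per-token log-probabilities for one beam-search step.

    Shallow fusion:    log P_E2E + lm_weight * log P_extLM
    ILME-based fusion: additionally subtracts ilm_weight * log P_ILM,
    where P_ILM is an estimate of the E2E model's internal LM
    (e.g., obtained by scoring label sequences with the acoustic
    encoder contribution removed).
    """
    return log_p_e2e + lm_weight * log_p_ext_lm - ilm_weight * log_p_ilm


# Toy usage: a vocabulary of 4 tokens at a single decoding step.
if __name__ == "__main__":
    log_p_e2e = torch.log_softmax(torch.randn(4), dim=-1)
    log_p_ext = torch.log_softmax(torch.randn(4), dim=-1)
    log_p_ilm = torch.log_softmax(torch.randn(4), dim=-1)
    print(ilme_fused_score(log_p_e2e, log_p_ext, log_p_ilm))
```

Intuitively, subtracting the internal LM score lets the external, target-domain LM replace the source-domain linguistic prior baked into the E2E model, which is why ILME-based fusion is expected to help most in the cross-domain setting.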