Owing to the mismatch between source and target domains, how to better exploit biased-word information to improve the performance of automatic speech recognition (ASR) models in the target domain has become an active research topic. Previous approaches either decode with a fixed external language model or introduce a sizeable biasing module, both of which lead to poor adaptability and slow inference. In this work, we propose CB-Conformer, which improves biased-word recognition by adding a Contextual Biasing Module and a Self-Adaptive Language Model to the vanilla Conformer. The Contextual Biasing Module combines audio fragments with contextual information while adding only 0.2% of the original Conformer's parameters. The Self-Adaptive Language Model adjusts the internal weights of biased words according to their recall and precision, yielding a stronger focus on biased words and tighter integration with the ASR model than a standard fixed language model. In addition, we construct and release an open-source Mandarin biased-word dataset based on WenetSpeech. Experiments show that our proposed method brings a 15.34% character error rate reduction, a 14.13% increase in biased-word recall, and a 6.80% increase in biased-word F1-score compared with the base Conformer.
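The abstract does not give the Self-Adaptive Language Model's update rule, but the idea of re-weighting biased words by their measured recall and precision can be illustrated with a minimal sketch. All names and the boost formula below are hypothetical assumptions, not the paper's actual method: words the recognizer currently handles poorly (low per-word F1 on development data) receive a larger language-model weight.

```python
# Hypothetical sketch of recall/precision-driven bias re-weighting.
# Not the paper's actual formula; it only illustrates the idea that
# poorly recognized biased words should receive a larger LM weight.

def f1_score(recall: float, precision: float) -> float:
    """Harmonic mean of recall and precision; 0.0 when both are 0."""
    if recall + precision == 0:
        return 0.0
    return 2 * recall * precision / (recall + precision)

def adapt_bias_weights(base_weights, stats, boost=0.5):
    """Boost the LM weight of biased words the model struggles with.

    base_weights: dict word -> base language-model weight
    stats:        dict word -> (recall, precision) measured on dev data
    boost:        maximum extra weight for a never-recognized word
    """
    adapted = {}
    for word, weight in base_weights.items():
        recall, precision = stats.get(word, (0.0, 0.0))
        # The lower the per-word F1, the larger the multiplicative boost.
        adapted[word] = weight * (1.0 + boost * (1.0 - f1_score(recall, precision)))
    return adapted
```

Under this sketch, a biased word recognized perfectly keeps its base weight, while a word never recognized gets the full boost, so the language model's attention shifts toward the hardest biased words as statistics are refreshed.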