The neural transducer is now the most popular end-to-end model for speech recognition, due to its naturally streaming ability. However, adapting it with text-only data is challenging. The factorized neural transducer (FNT) model was proposed to mitigate this problem, but its improved adaptability on text-only data came at the cost of lower accuracy than the standard neural transducer. We propose several methods to improve the performance of the FNT model: adding a CTC criterion during training, adding a KL divergence loss during adaptation, using a pre-trained language model to seed the vocabulary predictor, and an efficient adaptation approach that interpolates the vocabulary predictor with an n-gram language model. Combining these approaches yields a relative word-error-rate reduction of 9.48\% over the standard FNT model. Furthermore, interpolating the vocabulary predictor with an n-gram language model greatly speeds up adaptation while retaining satisfactory adaptation performance.
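The n-gram interpolation mentioned above can be sketched as a weighted combination of the vocabulary predictor's distribution with an n-gram LM's distribution over the same vocabulary. This is a minimal illustrative sketch assuming simple linear interpolation with a hypothetical weight `lam`; the paper's exact formulation (e.g., log-domain combination or learned weights) may differ.

```python
def interpolate(p_vp, p_ngram, lam=0.3):
    """Linearly interpolate two probability distributions over the vocabulary.

    p_vp: vocabulary-predictor probabilities, {token: prob}
    p_ngram: n-gram LM probabilities on adaptation text, {token: prob}
    lam: interpolation weight given to the n-gram LM (hypothetical name/value)
    """
    vocab = set(p_vp) | set(p_ngram)
    return {w: (1.0 - lam) * p_vp.get(w, 0.0) + lam * p_ngram.get(w, 0.0)
            for w in vocab}
```

Because only the n-gram counts need to be estimated on the adaptation text, this avoids gradient updates to the vocabulary predictor, which is consistent with the claimed speedup in adaptation.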