Recognizing personal rare words in end-to-end automatic speech recognition (E2E ASR) models is challenging due to a lack of training data. A standard way to address this issue is shallow fusion at inference time. However, because shallow fusion depends on external language models and applies deterministic weight boosting, its performance is limited. In this paper, we propose training neural contextual adapters for personalization in neural transducer based ASR models. Our approach not only biases recognition towards user-defined words, but also has the flexibility to work with pretrained ASR models. Using an in-house dataset, we demonstrate that contextual adapters can be applied to any general-purpose pretrained ASR model to improve personalization. Our method outperforms shallow fusion while retaining the functionality of the pretrained model, since none of its weights are altered. We further show that adapter-style training is superior to full fine-tuning of the ASR models on datasets with user-defined content.
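To make the idea of a contextual adapter concrete, the following is a minimal illustrative sketch (not the paper's exact architecture): a residual cross-attention module that attends from frozen encoder outputs to embeddings of user-defined words and adds the result back as a bias. All shapes, weight names, and the use of NumPy here are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextual_adapter(encoder_out, context_embs, W_q, W_k, W_v):
    """Illustrative contextual biasing via cross-attention.

    encoder_out:  (T, d) outputs of a frozen pretrained encoder
    context_embs: (N, d) embeddings of user-defined words
    W_q, W_k, W_v: (d, d) trainable adapter weights (hypothetical names)
    """
    Q = encoder_out @ W_q                           # queries from audio frames
    K = context_embs @ W_k                          # keys from context words
    V = context_embs @ W_v                          # values from context words
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (T, N) attention weights
    bias = attn @ V                                 # (T, d) contextual bias
    # Residual connection: pretrained model behavior is preserved
    # when the adapter contributes little, since no base weights change.
    return encoder_out + bias

rng = np.random.default_rng(0)
d = 8
enc = rng.standard_normal((5, d))   # 5 encoder frames (toy data)
ctx = rng.standard_normal((3, d))   # 3 user-defined word embeddings (toy data)
Wq, Wk, Wv = (0.1 * rng.standard_normal((d, d)) for _ in range(3))
out = contextual_adapter(enc, ctx, Wq, Wk, Wv)
print(out.shape)  # (5, 8): same shape as the encoder output, so the
                  # biased features drop into the transducer unchanged
```

Because only the adapter weights are trained while the base model stays frozen, this style of module can be attached to any pretrained encoder, matching the abstract's claim that no model weights are altered.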