Personalization in multi-turn dialogs has been a long-standing challenge for end-to-end automatic speech recognition (E2E ASR) models. Recent work on contextual adapters has tackled rare-word recognition using user catalogs. This adaptation, however, does not incorporate an important cue available in multi-turn dialogs: the dialog act. In this work, we propose a dialog-act guided contextual adapter network. Specifically, it leverages dialog acts to select the most relevant user catalogs, and it creates queries based on both the audio and the semantic relationship between the carrier phrase and the user catalogs to better guide the contextual biasing. On industrial voice-assistant datasets, our model outperforms both baselines (a dialog act encoder-only model and the contextual adapter), yielding the largest improvement over the no-context model: a 58% average relative word error rate reduction (WERR) in the multi-turn dialog scenario, compared to the 39% WERR achieved by the prior-art contextual adapter over the no-context model.
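The core idea of fusing the audio state with a dialog-act cue into a single attention query over user-catalog entries can be illustrated with a minimal sketch. This is an illustrative assumption of the mechanism, not the paper's implementation: the function name, shapes, and projection matrices (`W_q`, `W_k`, `W_v`) are all hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dialog_act_guided_biasing(audio_state, dialog_act_emb, catalog_embs,
                              W_q, W_k, W_v):
    """Hypothetical sketch: concatenate the audio encoder state with a
    dialog-act embedding to form one attention query, attend over
    user-catalog entry embeddings, and return a biasing vector that
    could be fed back into the ASR decoder."""
    # Query fuses acoustic and dialog-act cues.
    query = np.concatenate([audio_state, dialog_act_emb]) @ W_q   # (d,)
    keys = catalog_embs @ W_k                                     # (n, d)
    values = catalog_embs @ W_v                                   # (n, d)
    # Scaled dot-product attention over catalog entries.
    weights = softmax(keys @ query / np.sqrt(keys.shape[-1]))     # (n,)
    return weights @ values                                       # (d,)

# Usage with illustrative dimensions.
rng = np.random.default_rng(0)
d_audio, d_act, d_cat, d_model, n_entries = 8, 4, 32, 16, 5
bias = dialog_act_guided_biasing(
    rng.normal(size=d_audio),
    rng.normal(size=d_act),
    rng.normal(size=(n_entries, d_cat)),
    rng.normal(size=(d_audio + d_act, d_model)),
    rng.normal(size=(d_cat, d_model)),
    rng.normal(size=(d_cat, d_model)),
)
```

In this sketch, conditioning the query on the dialog-act embedding is what lets the attention weights shift toward the catalog entries relevant to the current turn, rather than attending on acoustics alone.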