In this paper, we explore the benefits of incorporating context into a Recurrent Neural Network Transducer (RNN-T) based Automatic Speech Recognition (ASR) model to improve speech recognition for virtual assistants. Specifically, we use meta information extracted from the time at which the utterance is spoken and from approximate location information to make the ASR context aware. We show that these contextual signals, when used individually, improve overall performance by as much as 3.48% relative to the baseline, and that when the contexts are combined, the model learns complementary features and recognition improves by 4.62%. On specific domains, these contextual signals yield improvements as high as 11.5%, without any significant degradation on the others. We ran experiments with models trained on datasets of 30K hours and 10K hours. We show that the improvement obtained with the 10K-hour dataset is much larger than that obtained with the 30K-hour dataset. Our results indicate that when limited data is available to train the ASR model, contextual signals can significantly improve performance.
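As an illustration only, and not the paper's actual architecture: one common way to inject utterance-level context such as time and coarse location into an RNN-T encoder is to embed the discrete context IDs and concatenate the embeddings with the per-frame acoustic features. The sketch below assumes this concatenation scheme; all module names, vocabulary sizes, and dimensions are hypothetical.

```python
# Minimal PyTorch sketch of a context-aware encoder front end.
# Assumption: context is injected by concatenating learned embeddings of a
# time-of-day bucket and a coarse location ID onto each acoustic frame.
import torch
import torch.nn as nn

class ContextAwareEncoder(nn.Module):
    def __init__(self, feat_dim=80, ctx_dim=16, hidden_dim=640,
                 num_time_buckets=24, num_locations=100):
        super().__init__()
        # Hypothetical context vocabularies: 24 hourly buckets, 100 regions.
        self.time_emb = nn.Embedding(num_time_buckets, ctx_dim)
        self.loc_emb = nn.Embedding(num_locations, ctx_dim)
        self.rnn = nn.LSTM(feat_dim + 2 * ctx_dim, hidden_dim, batch_first=True)

    def forward(self, feats, time_bucket, loc_id):
        # feats: (batch, frames, feat_dim); context IDs: (batch,)
        ctx = torch.cat([self.time_emb(time_bucket),
                         self.loc_emb(loc_id)], dim=-1)
        # Broadcast the utterance-level context to every acoustic frame.
        ctx = ctx.unsqueeze(1).expand(-1, feats.size(1), -1)
        out, _ = self.rnn(torch.cat([feats, ctx], dim=-1))
        return out

# Usage: a batch of 2 utterances, 50 frames of 80-dim filterbank features.
enc = ContextAwareEncoder()
feats = torch.randn(2, 50, 80)
out = enc(feats, torch.tensor([9, 21]), torch.tensor([3, 42]))
print(out.shape)  # torch.Size([2, 50, 640])
```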