Traditionally, research in automatic speech recognition has focused on local-first encoding of audio representations to predict the phonemes spoken in an utterance. Unfortunately, approaches that rely on such hyper-local information tend to be vulnerable both to local-level corruption (such as audio-frame drops or loud noises) and to global-level noise (such as environmental or background noise) that has not been seen during training. In this work, we introduce a novel approach that leverages a self-supervised learning technique based on masked language modeling to compute a global, multi-modal encoding of the environment in which the utterance occurs. We then use a new deep-fusion framework to integrate this global context into a traditional ASR method, and demonstrate that the resulting method can outperform baseline methods by up to 7% on Librispeech; gains on internal datasets range from 6% (on larger models) to 45% (on smaller models).
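To make the fusion idea concrete, here is a minimal sketch (not the paper's implementation) of how a per-utterance global context embedding, such as one produced by a masked-language-model-style encoder over the surrounding multi-modal signal, could be gated into frame-level acoustic encodings before decoding. All module names, dimensions, and the gating formulation below are illustrative assumptions rather than the described deep-fusion framework itself.

```python
import torch
import torch.nn as nn


class GlobalContextFusion(nn.Module):
    """Fuse a per-utterance global context vector into frame-level acoustic features."""

    def __init__(self, frame_dim: int, context_dim: int):
        super().__init__()
        self.project = nn.Linear(context_dim, frame_dim)   # map context into frame space
        self.gate = nn.Linear(frame_dim * 2, frame_dim)    # learn how much context to admit per frame

    def forward(self, frames: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # frames:  (batch, time, frame_dim)  local acoustic encodings
        # context: (batch, context_dim)      one global environment embedding per utterance
        ctx = self.project(context).unsqueeze(1).expand_as(frames)
        gate = torch.sigmoid(self.gate(torch.cat([frames, ctx], dim=-1)))
        return frames + gate * ctx                          # gated residual fusion


if __name__ == "__main__":
    fusion = GlobalContextFusion(frame_dim=256, context_dim=128)
    frames = torch.randn(2, 100, 256)    # dummy frame-level acoustic encodings
    context = torch.randn(2, 128)        # dummy global environment embeddings
    print(fusion(frames, context).shape)  # torch.Size([2, 100, 256])
```

A gated residual is only one plausible way to inject global context deep inside an ASR encoder; the broadcast-and-gate structure simply illustrates how one utterance-level vector can condition every frame without altering the sequence length expected by the downstream decoder.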