Noise robustness is essential for deploying automatic speech recognition (ASR) systems in real-world environments. One way to reduce the effect of noise interference is to employ a preprocessing module that performs speech enhancement and then feeds the enhanced speech to an ASR back-end. In this work, instead of suppressing background noise with a conventional cascaded pipeline, we employ a noise-robust representation learned by a refined self-supervised framework for noisy speech recognition. We propose to combine a reconstruction module with contrastive learning and perform multi-task continual pre-training on noisy data. The reconstruction module serves as an auxiliary learning objective that improves the noise robustness of the learned representation and is therefore not required during inference. Experiments demonstrate the effectiveness of our proposed method. Our model substantially reduces the word error rate (WER) on the synthesized noisy LibriSpeech test sets, yielding around 4.1/7.5% WER reduction on the noisy clean/other test sets compared to data augmentation. For the real-world noisy speech from the CHiME-4 challenge (1-channel track), we obtain state-of-the-art ASR performance without any denoising front-end. Moreover, we achieve performance comparable to the best reported supervised approach while using only 16% of the labeled data.
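For concreteness, a minimal sketch of the multi-task pre-training objective summarized above, assuming the contrastive loss and the auxiliary reconstruction loss are combined as a weighted sum (the weight \lambda is a hypothetical hyper-parameter, not specified in this abstract):

\mathcal{L}_{\text{pre-train}} \;=\; \mathcal{L}_{\text{contrastive}} \;+\; \lambda \, \mathcal{L}_{\text{reconstruction}}

Here \mathcal{L}_{\text{contrastive}} is the self-supervised contrastive objective computed on noisy inputs, and \mathcal{L}_{\text{reconstruction}} is the auxiliary loss of the reconstruction module; only the representation trained with \mathcal{L}_{\text{contrastive}} is kept at inference time.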