Automatic speech recognition (ASR) has achieved remarkable success thanks to recent advances in deep learning, but its performance usually degrades significantly under real-world noisy conditions. Recent works introduce speech enhancement (SE) as a front-end to improve speech quality, which has proven effective but may be suboptimal for downstream ASR due to the speech distortion problem. Building on this, the latest works combine SE with the currently popular self-supervised learning (SSL) to alleviate distortion and improve noise robustness. Despite their effectiveness, the speech distortion caused by conventional SE still cannot be completely eliminated. In this paper, we propose a self-supervised framework named Wav2code to implement a generalized SE without distortions for noise-robust ASR. First, in the pre-training stage, clean speech representations from an SSL model are used to look up a discrete codebook via nearest-neighbor feature matching; the resulting code sequence is then exploited to reconstruct the original clean representations, so that they are stored in the codebook as a prior. Second, during finetuning we propose a Transformer-based code predictor that accurately predicts clean codes by modeling the global dependency of the input noisy representations, which enables discovery and restoration of high-quality clean representations without distortions. Furthermore, we propose an interactive feature fusion network that combines the original noisy and the restored clean representations to account for both fidelity and quality, yielding even more informative features for downstream ASR. Finally, experiments on both synthetic and real noisy datasets demonstrate that Wav2code alleviates speech distortion and improves ASR performance under various noisy conditions, resulting in stronger robustness.
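The codebook lookup described above can be sketched as standard nearest-neighbor vector quantization: each frame-level representation is matched to its closest code entry, and the matched entries form the quantized reconstruction. The sketch below is illustrative only; the function name, shapes, and distance metric are assumptions, not the paper's exact implementation.

```python
import numpy as np

def codebook_lookup(features, codebook):
    """Nearest-neighbor feature matching against a discrete codebook.

    features: (T, D) frame-level speech representations (e.g. from an SSL model)
    codebook: (K, D) learnable discrete code entries
    Returns the quantized (T, D) representations and the (T,) code sequence.
    """
    # Pairwise squared Euclidean distances between frames and code entries.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    indices = dists.argmin(axis=1)   # code sequence: index of nearest code per frame
    quantized = codebook[indices]    # reconstructed representations from the codebook
    return quantized, indices
```

During pre-training, the quantized output would be trained to reconstruct the clean input so the codebook stores a clean-speech prior; during finetuning, the code predictor replaces this nearest-neighbor step by predicting the indices directly from noisy representations.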