Automatic Speech Recognition (ASR) systems degrade considerably when the source speech is corrupted by noise or room impulse responses (RIRs). Speech enhancement is typically applied under both mismatched and matched training/testing scenarios: in the matched setting, the acoustic model (AM) is trained on dereverberated far-field features, while in the mismatched setting, the AM is kept fixed. Recently, mapping speech features from far-field to close-talk using a denoising autoencoder (DA) has been explored. In this paper, we focus on matched-scenario training and show that the proposed joint VAE based mapping achieves a significant improvement over the DA. Specifically, we observe an absolute improvement of 2.5% in word error rate (WER) over DA based enhancement and 3.96% over an AM trained directly on far-field filterbank features.
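To make the DA-based feature mapping concrete, the following is a minimal illustrative sketch (not the paper's actual model or data): a one-hidden-layer denoising autoencoder trained with MSE to map corrupted "far-field" feature vectors back to clean "close-talk" targets. All dimensions, the synthetic data, and the plain gradient-descent training loop are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, hidden, n = 40, 64, 256                 # filterbank dim, hidden units, frames (illustrative)
clean = rng.standard_normal((n, dim))        # stand-in for close-talk features
farfield = clean + 0.3 * rng.standard_normal((n, dim))  # corrupted far-field input

# Encoder/decoder weights, small random init
W1 = 0.1 * rng.standard_normal((dim, hidden))
b1 = np.zeros(hidden)
W2 = 0.1 * rng.standard_normal((hidden, dim))
b2 = np.zeros(dim)

def forward(x):
    h = np.tanh(x @ W1 + b1)                 # encoder
    return h, h @ W2 + b2                    # decoder output = enhanced features

lr = 0.01
losses = []
for _ in range(200):
    h, out = forward(farfield)
    err = out - clean                        # regression target is the clean features
    losses.append(float((err ** 2).mean()))  # MSE loss
    # Backpropagation (constant factors folded into the learning rate)
    dW2 = h.T @ err / n
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh derivative
    dW1 = farfield.T @ dh / n
    db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

In the matched setting described above, the AM would then be trained on the enhanced outputs of such a mapping network rather than on the raw far-field features.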