This paper studies a novel pre-training technique with unpaired speech data, Speech2C, for encoder-decoder based automatic speech recognition (ASR). Within a multi-task learning framework, we introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes, derived from an offline clustering model. One task is to predict the pseudo codes via masked language modeling at the encoder output, as in the HuBERT model, while the other lets the decoder learn to reconstruct the pseudo codes autoregressively instead of generating textual transcripts. In this way, the decoder learns to reconstruct the original speech information with codes before learning to generate correct text. Comprehensive experiments on the LibriSpeech corpus show that the proposed Speech2C reduces the word error rate (WER) by 19.2% relative over the method without decoder pre-training, and also significantly outperforms the state-of-the-art wav2vec 2.0 and HuBERT on the 10h and 100h fine-tuning subsets. We release our code and model at https://github.com/microsoft/SpeechT5/tree/main/Speech2C.
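The two pre-training tasks can be sketched as a combined cross-entropy objective over the pseudo codes. The following is a minimal, schematic illustration (the function names, toy probability inputs, and unweighted sum of the two losses are assumptions for exposition, not the released implementation):

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood of the target code under a probability vector."""
    return -math.log(probs[target])

def speech2c_loss(enc_probs, dec_probs, codes, masked_positions):
    """Schematic multi-task objective (hypothetical helper, not the released code).

    enc_probs[t]     : encoder's distribution over pseudo codes at frame t
    dec_probs[i]     : decoder's distribution over the i-th code, conditioned
                       autoregressively on codes[:i] (teacher forcing)
    codes            : pseudo-code targets from the offline clustering model
    masked_positions : frames masked in the encoder input (HuBERT-style)
    """
    # Task 1: masked prediction of pseudo codes at the encoder output.
    l_enc = sum(cross_entropy(enc_probs[t], codes[t]) for t in masked_positions)
    # Task 2: autoregressive reconstruction of the full code sequence
    # by the decoder, in place of text generation during pre-training.
    l_dec = sum(cross_entropy(dec_probs[i], codes[i]) for i in range(len(codes)))
    return l_enc + l_dec

# Toy usage: two frames, a code vocabulary of size 2, one masked frame.
codes = [0, 1]
enc_probs = [[0.9, 0.1], [0.2, 0.8]]
dec_probs = [[0.7, 0.3], [0.4, 0.6]]
loss = speech2c_loss(enc_probs, dec_probs, codes, masked_positions=[1])
```

In fine-tuning, the decoder's code targets are replaced with text transcripts, so the decoder has already learned to attend to and reconstruct speech content before it must produce correct text.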