Self-supervised acoustic pre-training has achieved impressive results on low-resource speech recognition tasks, indicating that the pretrain-and-finetune paradigm is a promising direction. In this work, we propose an end-to-end model for low-resource speech recognition that fuses a pre-trained audio encoder (wav2vec2.0) and a pre-trained text decoder (BERT). The two modules are connected by a parameter-free linear attention mechanism, and a fully connected layer is introduced to map hidden representations between the speech and language modalities. In addition, we design an effective fine-tuning strategy that preserves and exploits the text context modeling ability of the pre-trained decoder. Armed with this strategy, our model exhibits distinctly faster convergence and better performance. On the CALLHOME corpus (15 h), our model achieves recognition performance approaching that of the SOTA pipeline model.
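A minimal sketch of the described fusion, not the authors' exact implementation: it connects a pre-trained wav2vec2.0 encoder and a pre-trained BERT text module through a fully connected bridging layer, and approximates the parameter-free attention with a plain scaled dot product between text and acoustic states. The checkpoint names, the residual fusion, and the output projection head are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Wav2Vec2Model, BertModel


class Wav2Vec2BertFusion(nn.Module):
    """Illustrative fusion of a wav2vec2.0 encoder and a BERT text module."""

    def __init__(self, vocab_size: int = 30522):
        super().__init__()
        # Assumed checkpoints; the paper does not specify exact model names.
        self.encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h")
        self.decoder = BertModel.from_pretrained("bert-base-uncased")
        enc_dim = self.encoder.config.hidden_size
        dec_dim = self.decoder.config.hidden_size
        # Fully connected layer mapping acoustic hidden states into the
        # decoder's text representation space.
        self.bridge = nn.Linear(enc_dim, dec_dim)
        # Hypothetical output head over the BERT vocabulary.
        self.lm_head = nn.Linear(dec_dim, vocab_size)

    def forward(self, speech, input_ids, attention_mask=None):
        # (B, T_audio, enc_dim) acoustic representations from wav2vec2.0.
        acoustic = self.encoder(speech).last_hidden_state
        acoustic = self.bridge(acoustic)
        # (B, T_text, dec_dim) contextual text representations from BERT.
        textual = self.decoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
        # Parameter-free attention stand-in: scaled dot product between text
        # states (queries) and acoustic states (keys/values), no learned weights.
        scores = torch.bmm(textual, acoustic.transpose(1, 2)) / acoustic.size(-1) ** 0.5
        context = torch.bmm(F.softmax(scores, dim=-1), acoustic)
        # Fuse text and attended acoustic context, then predict token logits.
        return self.lm_head(textual + context)


if __name__ == "__main__":
    model = Wav2Vec2BertFusion()
    speech = torch.randn(1, 16000)            # 1 s of 16 kHz audio (dummy)
    tokens = torch.randint(0, 30522, (1, 8))  # dummy token ids
    print(model(speech, tokens).shape)        # (1, 8, vocab_size)
```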