Encoder pre-training is promising in end-to-end Speech Translation (ST), given that speech-to-translation data is scarce. But ST encoders are not simple instances of Automatic Speech Recognition (ASR) or Machine Translation (MT) encoders. For example, we find that ASR encoders lack the global context representation that translation requires, whereas MT encoders are not designed to handle long but locally attentive acoustic sequences. In this work, we propose a Stacked Acoustic-and-Textual Encoding (SATE) method for speech translation. Our encoder begins by processing the acoustic sequence as usual, but later behaves more like an MT encoder, producing a global representation of the input sequence. This makes it straightforward to incorporate pre-trained models into the system. We also develop an adaptor module to alleviate the representation inconsistency between the pre-trained ASR encoder and the MT encoder, and a multi-teacher knowledge distillation method to preserve the pre-training knowledge. Experimental results on the LibriSpeech En-Fr and MuST-C En-De tasks show that our method achieves state-of-the-art performance of 18.3 and 25.2 BLEU points. To our knowledge, this is the first end-to-end ST system to achieve comparable or even better BLEU performance than its cascaded counterpart when large-scale ASR and MT data are available.
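For concreteness, the sketch below illustrates the stacked acoustic-and-textual encoding idea: an ASR-style acoustic encoder, an adaptor that maps its outputs toward the input space the textual (MT-style) encoder expects, and the textual encoder stacked on top. This is a minimal illustration, not the authors' released implementation; the layer counts, convolutional subsampling, and the LayerNorm-plus-linear form of the adaptor are assumptions, and the multi-teacher knowledge distillation is omitted.

```python
# Minimal sketch of stacked acoustic-and-textual encoding (illustrative, not the
# authors' code). Dimensions and layer counts are assumptions.
import torch
import torch.nn as nn

class Adaptor(nn.Module):
    """Bridges the representation mismatch between acoustic and textual encoders."""
    def __init__(self, d_model: int):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        # Project ASR-style states toward the distribution the MT encoder expects.
        return self.proj(self.norm(x))

class SATEEncoder(nn.Module):
    def __init__(self, feat_dim=80, d_model=512, n_heads=8,
                 acoustic_layers=12, textual_layers=6):
        super().__init__()
        # Convolutional subsampling shortens the long acoustic feature sequence.
        self.subsample = nn.Sequential(
            nn.Conv1d(feat_dim, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        acoustic_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        textual_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # In the paper these two stacks would be initialized from pre-trained
        # ASR and MT encoders, respectively.
        self.acoustic = nn.TransformerEncoder(acoustic_layer, acoustic_layers)
        self.adaptor = Adaptor(d_model)
        self.textual = nn.TransformerEncoder(textual_layer, textual_layers)

    def forward(self, speech_feats):            # (batch, frames, feat_dim)
        x = self.subsample(speech_feats.transpose(1, 2)).transpose(1, 2)
        x = self.acoustic(x)                    # local, ASR-like representations
        x = self.adaptor(x)                     # ease the representation mismatch
        return self.textual(x)                  # global, MT-like representations

enc = SATEEncoder()
out = enc(torch.randn(2, 200, 80))
print(out.shape)                                # torch.Size([2, 50, 512])
```

The key design choice captured here is that the pre-trained ASR and MT encoders keep their original roles, while only the small adaptor has to learn to reconcile their representations.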