Multi-source sequence generation (MSG) is an important class of sequence generation tasks that take multiple sources as input, including automatic post-editing, multi-source translation, and multi-document summarization. As MSG tasks suffer from data scarcity and pretrained models have proven effective for low-resource downstream tasks, transferring pretrained sequence-to-sequence models to MSG tasks is essential. Directly finetuning pretrained models on MSG tasks by concatenating the multiple sources into a single long sequence is a simple way to perform this transfer, but we conjecture that direct finetuning leads to catastrophic forgetting and that relying solely on pretrained self-attention layers to capture cross-source information is insufficient. We therefore propose a two-stage finetuning method that alleviates the pretrain-finetune discrepancy, and introduce a novel MSG model with a fine encoder that learns better representations for MSG tasks. Experiments show that our approach achieves new state-of-the-art results on the WMT17 APE task and on the multi-source translation task using the WMT14 test set. When adapted to document-level translation, our framework significantly outperforms strong baselines.
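To make the concatenation baseline mentioned above concrete, the following is a minimal sketch of feeding multiple sources to a pretrained sequence-to-sequence model as one long input. The checkpoint name, the separator token, and the example sentences are illustrative assumptions, not the exact setup used in the paper.

```python
# Sketch: concatenation baseline for multi-source sequence generation.
# Multiple sources (e.g. the source sentence and the MT output in automatic
# post-editing) are joined into a single input sequence for a generic
# pretrained seq2seq model. Checkpoint and separator are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-base"  # any pretrained seq2seq checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

src = "Das Haus ist klein."   # first source (e.g. original sentence)
mt = "The house is little."   # second source (e.g. MT output to post-edit)

# Concatenate the sources into one long sequence, delimited by the
# tokenizer's separator token (any special delimiter would do).
joined = f"{src} {tokenizer.sep_token} {mt}"
inputs = tokenizer(joined, return_tensors="pt")

# Direct finetuning would train on (joined input, reference) pairs;
# here we only show inference with the concatenated input.
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```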