In this paper, we introduce DOCmT5, a multilingual sequence-to-sequence language model pretrained on large-scale parallel documents. While previous approaches have focused on leveraging sentence-level parallel data, we aim to build a general-purpose pretrained model that can understand and generate long documents. We propose a simple and effective pretraining objective, Document reordering Machine Translation (DrMT), in which the input documents are shuffled and masked and must be translated. DrMT brings consistent improvements over strong baselines on a variety of document-level generation tasks, including over 12 BLEU points for seen-language-pair document-level MT, over 7 BLEU points for unseen-language-pair document-level MT, and over 3 ROUGE-1 points for seen-language-pair cross-lingual summarization. We achieve state-of-the-art (SOTA) results on the WMT20 De-En and IWSLT15 Zh-En document translation tasks. We also conduct extensive analysis of various factors in document pretraining, including (1) the effects of pretraining data quality and (2) the effects of combining monolingual and cross-lingual pretraining. We plan to make our model checkpoints publicly available.
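To make the DrMT objective concrete, the sketch below builds a single training example from a parallel document: the source-language sentences are shuffled, a fraction of their tokens are masked, and the target is the translated document in its original order. This is only an illustration of the idea as described above; the function name, the generic <mask> placeholder, the token-level masking, and the masking rate are assumptions for the sketch, not details of the actual data pipeline.

```python
import random

def make_drmt_example(src_sentences, tgt_sentences, mask_prob=0.15, seed=0):
    """Build one (input, target) pair for the DrMT objective (illustrative sketch).

    src_sentences: list of source-language sentences of one document
    tgt_sentences: list of target-language sentences of the same document
    """
    rng = random.Random(seed)

    # 1) Shuffle the order of the source sentences (document reordering).
    shuffled = list(src_sentences)
    rng.shuffle(shuffled)

    # 2) Mask a fraction of tokens in the shuffled source.
    #    A simple token-level <mask> placeholder is used here; the exact
    #    masking scheme is an assumption of this sketch.
    masked_sentences = []
    for sent in shuffled:
        kept = []
        for tok in sent.split():
            kept.append("<mask>" if rng.random() < mask_prob else tok)
        masked_sentences.append(" ".join(kept))

    # 3) The target is the translation of the document in its original order.
    model_input = " ".join(masked_sentences)
    model_target = " ".join(tgt_sentences)
    return model_input, model_target

if __name__ == "__main__":
    src = ["Der Hund schläft .", "Die Katze miaut ."]
    tgt = ["The dog sleeps .", "The cat meows ."]
    inp, out = make_drmt_example(src, tgt)
    print("input :", inp)
    print("target:", out)
```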