Document-level neural machine translation (DocNMT) delivers coherent translations by incorporating cross-sentence context. However, for most language pairs there is a shortage of parallel documents, although parallel sentences are readily available. In this paper, we study whether and how contextual modeling in DocNMT is transferable from sentences to documents in a zero-shot fashion (i.e., with no parallel documents for student languages) via multilingual modeling. Using simple concatenation-based DocNMT, we explore the effect of three factors on multilingual transfer: the number of document-supervised teacher languages, the data schedule for parallel documents at training, and the data condition of parallel documents (genuine vs. back-translated). Our experiments on the Europarl-7 and IWSLT-10 datasets show the feasibility of multilingual transfer for DocNMT, particularly on document-specific metrics. We observe that more teacher languages and an adequate data schedule both contribute to better transfer quality. Surprisingly, transfer is less sensitive to the data condition: multilingual DocNMT achieves comparable performance with both back-translated and genuine document pairs.
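As a point of reference for "simple concatenation-based DocNMT," the sketch below illustrates the commonly used data-preparation step: each source sentence is joined with its preceding context sentences via a separator token, so a standard sentence-level sequence-to-sequence model can attend across sentence boundaries. This is a minimal illustration, not the paper's implementation; the separator token `<sep>`, the function name, and the context window size are all assumptions for the example.

```python
# Illustrative sketch (not from the paper): concatenation-based DocNMT
# builds training examples by prepending previous source sentences to the
# current one, joined by a separator, so a vanilla NMT model sees
# cross-sentence context.

SEP = "<sep>"  # hypothetical separator token; actual choices vary by paper


def make_doc_examples(src_doc, tgt_doc, context_size=1):
    """Build (source, target) pairs where each source carries up to
    `context_size` preceding sentences as inline context."""
    examples = []
    for i, (src, tgt) in enumerate(zip(src_doc, tgt_doc)):
        context = src_doc[max(0, i - context_size):i]
        source = f" {SEP} ".join(context + [src])
        examples.append((source, tgt))
    return examples


if __name__ == "__main__":
    src = ["Er kam gestern an.", "Er war sehr muede."]
    tgt = ["He arrived yesterday.", "He was very tired."]
    for s, t in make_doc_examples(src, tgt):
        print(s, "=>", t)
```

Under this formulation, transferring contextual modeling across languages amounts to training the multilingual model on such concatenated examples only for the teacher languages, while student languages contribute sentence-level pairs alone.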