Although all-in-one-model multilingual neural machine translation (MNMT) has achieved remarkable progress in recent years, its selected overall best checkpoint fails to achieve the best performance on all language pairs simultaneously. This is because the best checkpoints for the individual language pairs (i.e., the language-specific best checkpoints) are scattered across different epochs. In this paper, we present a novel training strategy dubbed Language-Specific Self-Distillation (LSSD) to bridge the gap between the language-specific best checkpoints and the overall best checkpoint. Specifically, we regard each language-specific best checkpoint as a teacher to distill the overall best checkpoint. Moreover, we systematically explore three variants of LSSD, which perform distillation statically, selectively, and adaptively. Experimental results on two widely used benchmarks show that LSSD obtains consistent improvements across all language pairs and achieves state-of-the-art performance.
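To make the self-distillation idea concrete, the following is a minimal sketch, assuming the distillation signal is a temperature-scaled KL term from the language-specific best checkpoint (teacher) for the current batch's language pair to the model being trained, combined with the usual cross-entropy translation loss. All names (`lssd_loss`, `training_step`, `teachers`, the batch keys) are illustrative assumptions, not the paper's actual implementation; the selective and adaptive variants described above would additionally gate or weight the distillation term.

```python
import torch
import torch.nn.functional as F


def lssd_loss(student_logits, teacher_logits, gold_targets, pad_idx,
              alpha=0.5, temperature=1.0):
    """Cross-entropy on gold targets plus KL distillation from a
    language-specific teacher checkpoint (illustrative sketch)."""
    # Standard translation loss against the reference targets.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        gold_targets.view(-1),
        ignore_index=pad_idx,
    )
    # Distillation term: pull the student toward the teacher's distribution.
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    # Static variant: fixed interpolation weight alpha; the selective and
    # adaptive variants would instead decide per batch whether / how strongly
    # to apply the KL term.
    return (1.0 - alpha) * ce + alpha * kl


def training_step(model, teachers, batch, pad_idx):
    """Hypothetical step: look up the teacher that matches the batch's
    language pair, i.e., the language-specific best checkpoint."""
    student_logits = model(batch["src"], batch["tgt_in"])
    with torch.no_grad():
        teacher = teachers[batch["lang_pair"]]  # e.g. "en-de"
        teacher_logits = teacher(batch["src"], batch["tgt_in"])
    return lssd_loss(student_logits, teacher_logits, batch["tgt_out"], pad_idx)
```

In this sketch the teacher table `teachers` would be refreshed whenever a language pair reaches a new best validation score, so later epochs are distilled from the strongest checkpoint seen so far for each pair.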