Multilingual machine translation models can benefit from synergy between different language pairs, but also suffer from interference. While there is a growing number of sophisticated methods that aim to eliminate interference, our understanding of interference as a phenomenon is still limited. This work identifies the main factors that contribute to interference in multilingual machine translation. Through systematic experimentation, we find that interference (or synergy) is primarily determined by model size, data size, and the proportion of each language pair within the total dataset. We observe that substantial interference occurs mainly when the model is very small with respect to the available training data, and that using standard transformer configurations with fewer than one billion parameters largely alleviates interference and promotes synergy. Moreover, we show that tuning the sampling temperature to control the proportion of each language pair in the data is key to balancing the amount of interference between low- and high-resource language pairs effectively, and can lead to superior performance overall.
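As a concrete illustration of the temperature-based sampling mentioned above, the sketch below assumes the standard formulation in which each language pair's sampling probability is proportional to its share of the training data raised to the power 1/T; the function name and example counts are hypothetical and not taken from the paper.

```python
import numpy as np

def sampling_probs(example_counts, temperature=1.0):
    """Temperature-based sampling over language pairs (a minimal sketch).

    With T = 1 the natural data proportions are kept; larger T flattens the
    distribution toward uniform, up-weighting low-resource pairs.
    """
    counts = np.asarray(example_counts, dtype=np.float64)
    probs = counts / counts.sum()          # natural proportion of each pair
    probs = probs ** (1.0 / temperature)   # temperature scaling
    return probs / probs.sum()             # renormalize to a distribution

# Hypothetical example counts for three language pairs (high, mid, low resource).
counts = [10_000_000, 1_000_000, 100_000]
print(sampling_probs(counts, temperature=1.0))  # natural proportions
print(sampling_probs(counts, temperature=5.0))  # flatter, favors the low-resource pair
```

Raising the temperature increases the share of low-resource pairs in each training batch, which is the lever the paper tunes to trade off interference between low- and high-resource language pairs.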