Multilingual neural machine translation (MNMT) learns to translate multiple language pairs with a single model, potentially improving both the accuracy and the memory efficiency of deployed models. However, the heavy data imbalance between languages hinders the model from performing uniformly across language pairs. In this paper, we propose a new learning objective for MNMT based on distributionally robust optimization, which minimizes the worst-case expected loss over the set of language pairs. We further show how to practically optimize this objective for large translation corpora using an iterated best response scheme, which is both effective and efficient, incurring negligible additional computational cost compared to standard empirical risk minimization. We perform extensive experiments on three sets of languages from two datasets and show that our method consistently outperforms strong baselines in both average and per-language performance under many-to-one and one-to-many translation settings.
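To make the objective concrete, the following is a minimal sketch in our own notation; the exact uncertainty set used in the paper may differ. With $N$ language pairs and $\ell_i(\theta)$ the expected translation loss of model parameters $\theta$ on pair $i$, the distributionally robust objective is

\[
\min_{\theta} \; \max_{q \in \mathcal{Q}} \; \sum_{i=1}^{N} q_i \, \ell_i(\theta),
\qquad
\mathcal{Q} \subseteq \Delta^{N-1} = \Big\{ q \in \mathbb{R}_{\ge 0}^{N} : \textstyle\sum_{i=1}^{N} q_i = 1 \Big\},
\]

which reduces to standard empirical risk minimization when $\mathcal{Q}$ is the singleton containing the empirical distribution over language pairs. Iterated best response optimizes this min-max problem by alternating the two players: with $\theta$ fixed, move $q$ toward the current worst case, placing more weight on the highest-loss language pairs; with $q$ fixed, take ordinary gradient steps on the $q$-weighted loss, so the main overhead relative to empirical risk minimization is the periodic re-estimation of per-pair losses.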