Multilingual neural machine translation (MNMT) aims to translate multiple languages with a single model and has proven successful thanks to effective knowledge transfer among languages through shared parameters. However, it remains an open question which parameters should be shared and which should be task-specific. The common practice is to heuristically design or search for language-specific modules, which makes it difficult to find the optimal configuration. In this paper, we propose a novel parameter differentiation based method that allows the model to determine which parameters should be language-specific during training. Inspired by cellular differentiation, our method lets each shared parameter dynamically differentiate into more specialized types. We further define the differentiation criterion as inter-task gradient similarity, so parameters with conflicting inter-task gradients are more likely to become language-specific. Extensive experiments on multilingual datasets demonstrate that our method significantly outperforms strong baselines with various parameter sharing configurations. Further analyses reveal that the parameter sharing configuration obtained by our method correlates well with linguistic proximity.
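To make the differentiation criterion concrete, below is a minimal PyTorch sketch of detecting conflicting inter-task gradients via pairwise cosine similarity. It is not the authors' implementation: the function names, the `per_language_losses` dictionary, and the zero threshold are illustrative assumptions; the idea is only that a shared parameter whose per-language gradients point in opposing directions is a candidate for becoming language-specific.

```python
import torch
import torch.nn.functional as F

def gradient_cosine_similarity(grad_a: torch.Tensor, grad_b: torch.Tensor) -> float:
    """Cosine similarity between two flattened per-task gradients."""
    return F.cosine_similarity(grad_a.flatten(), grad_b.flatten(), dim=0).item()

def find_conflicting_parameters(model, per_language_losses, threshold=0.0):
    """Flag shared parameters whose inter-task gradients conflict.

    per_language_losses: dict mapping a language pair to its scalar loss
    (hypothetical interface; assumes each loss has its own graph).
    A parameter is a differentiation candidate when the minimum pairwise
    cosine similarity of its per-language gradients falls below `threshold`.
    """
    # Collect one gradient per language for every trainable parameter.
    grads = {name: [] for name, p in model.named_parameters() if p.requires_grad}
    for loss in per_language_losses.values():
        model.zero_grad()
        loss.backward(retain_graph=True)
        for name, p in model.named_parameters():
            if p.grad is not None:
                grads[name].append(p.grad.detach().clone())

    candidates = []
    for name, gs in grads.items():
        sims = [gradient_cosine_similarity(gs[i], gs[j])
                for i in range(len(gs)) for j in range(i + 1, len(gs))]
        if sims and min(sims) < threshold:
            candidates.append(name)  # conflicting gradients -> differentiate
    return candidates
```

In this sketch, differentiating a flagged parameter would amount to cloning it into per-language copies so each language group thereafter updates its own version.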