Multilingual neural machine translation aims at learning a single translation model for multiple languages. These jointly trained models often suffer from performance degradation on rich-resource language pairs. We attribute this degradation to parameter interference. In this paper, we propose LaSS to jointly train a single unified multilingual MT model. LaSS learns a Language Specific Sub-network (LaSS) for each language pair to counter parameter interference. Comprehensive experiments on IWSLT and WMT datasets with various Transformer architectures show that LaSS obtains gains on 36 language pairs by up to 1.2 BLEU. Besides, LaSS shows strong generalization: it extends easily to new language pairs and to zero-shot translation, boosting zero-shot translation by an average of 8.3 BLEU on 30 language pairs. Code and trained models are available at https://github.com/NLP-Playground/LaSS.
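As a rough illustration of the language-specific sub-network idea described above, the sketch below applies a per-language-pair binary mask to a shared weight matrix, so that each pair updates and uses only its own subset of the shared parameters. It is a minimal sketch assuming a PyTorch-style setup; the class and parameter names (MaskedLinear, prune_ratio, set_mask_by_magnitude) are hypothetical and not taken from the released code.

```python
import torch
import torch.nn as nn


class MaskedLinear(nn.Module):
    """A linear layer whose weights are shared across all language pairs,
    but whose forward pass applies a per-language-pair binary mask."""

    def __init__(self, in_dim: int, out_dim: int, lang_pairs, prune_ratio: float = 0.3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_dim))
        # One binary mask per language pair; initialized to the full network.
        self.masks = {lp: torch.ones_like(self.weight) for lp in lang_pairs}
        self.prune_ratio = prune_ratio

    def set_mask_by_magnitude(self, lang_pair: str, finetuned_weight: torch.Tensor):
        # Illustrative mask construction: keep the largest-magnitude weights
        # for this language pair and zero out the rest in its sub-network.
        k = int(finetuned_weight.numel() * self.prune_ratio)
        threshold = finetuned_weight.abs().flatten().kthvalue(k).values
        self.masks[lang_pair] = (finetuned_weight.abs() > threshold).float()

    def forward(self, x: torch.Tensor, lang_pair: str) -> torch.Tensor:
        # Only the parameters selected for this language pair participate,
        # which is how interference between pairs is reduced.
        return nn.functional.linear(x, self.weight * self.masks[lang_pair], self.bias)


# Usage: the same shared layer routes "en-de" and "en-fr" through
# different sub-networks of the same parameter set.
layer = MaskedLinear(512, 512, lang_pairs=["en-de", "en-fr"])
x = torch.randn(4, 512)
out_de = layer(x, "en-de")
out_fr = layer(x, "en-fr")
```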