Multilingual pre-trained models are known to suffer from the curse of multilinguality, which causes per-language performance to drop as they cover more languages. We address this issue by introducing language-specific modules, which allows us to grow the total capacity of the model, while keeping the total number of trainable parameters per language constant. In contrast with prior work that learns language-specific components post-hoc, we pre-train the modules of our Cross-lingual Modular (X-Mod) models from the start. Our experiments on natural language inference, named entity recognition and question answering show that our approach not only mitigates the negative interference between languages, but also enables positive transfer, resulting in improved monolingual and cross-lingual performance. Furthermore, our approach enables adding languages post-hoc with no measurable drop in performance, no longer limiting the model usage to the set of pre-trained languages.
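To make the modular idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of a transformer feed-forward block that pairs shared weights with per-language bottleneck modules: only the module of the current language is active, so the parameters used for any single language stay constant while total capacity grows with the number of languages, and a new language can be added post-hoc without touching existing weights. The class name, dimensions, and exact layer layout (ModularFeedForward, the bottleneck size, the residual placement) are illustrative assumptions.

```python
# Minimal sketch, assuming an X-Mod-style layout: a shared feed-forward sublayer
# followed by a language-specific bottleneck module selected by language ID.
import torch
import torch.nn as nn


class ModularFeedForward(nn.Module):
    def __init__(self, hidden_size: int, ffn_size: int, bottleneck: int, languages: list[str]):
        super().__init__()
        # Shared parameters: used by every language.
        self.shared_ffn = nn.Sequential(
            nn.Linear(hidden_size, ffn_size), nn.GELU(), nn.Linear(ffn_size, hidden_size)
        )
        self.norm = nn.LayerNorm(hidden_size)
        # Language-specific modules: only the current language's module runs,
        # so per-language trainable parameters stay constant as languages are added.
        self.lang_modules = nn.ModuleDict({
            lang: nn.Sequential(
                nn.Linear(hidden_size, bottleneck), nn.GELU(), nn.Linear(bottleneck, hidden_size)
            )
            for lang in languages
        })

    def add_language(self, lang: str) -> None:
        # Post-hoc extension: attach a fresh module for a new language without
        # modifying the shared weights or the modules of existing languages.
        hidden = self.norm.normalized_shape[0]
        bottleneck = next(iter(self.lang_modules.values()))[0].out_features
        self.lang_modules[lang] = nn.Sequential(
            nn.Linear(hidden, bottleneck), nn.GELU(), nn.Linear(bottleneck, hidden)
        )

    def forward(self, x: torch.Tensor, lang: str) -> torch.Tensor:
        h = x + self.shared_ffn(x)                      # shared computation
        h = h + self.lang_modules[lang](self.norm(h))   # language-specific computation
        return h


if __name__ == "__main__":
    layer = ModularFeedForward(hidden_size=768, ffn_size=3072, bottleneck=384,
                               languages=["en", "de", "sw"])
    tokens = torch.randn(2, 16, 768)                    # (batch, sequence, hidden)
    out_en = layer(tokens, lang="en")
    layer.add_language("th")                            # extend to a new language post-hoc
    out_th = layer(tokens, lang="th")
    print(out_en.shape, out_th.shape)                   # torch.Size([2, 16, 768]) twice
```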