Massively multilingual models pretrained on abundant corpora with self-supervision achieve state-of-the-art results in a wide range of natural language processing tasks. In machine translation, multilingual pretrained models are often fine-tuned on parallel data from one or multiple language pairs. Multilingual fine-tuning improves performance on medium- and low-resource languages but requires modifying the entire model and can be prohibitively expensive. Training a new set of adapters on each language pair, or training a single set of adapters on all language pairs while keeping the pretrained model's parameters frozen, has been proposed as a parameter-efficient alternative. However, the former does not permit any sharing between languages, while the latter shares parameters across all languages and has to deal with negative interference. In this paper, we propose training language-family adapters on top of a pretrained multilingual model to facilitate cross-lingual transfer. Our model consistently outperforms other adapter-based approaches. We also demonstrate that language-family adapters provide an effective method to translate to languages unseen during pretraining.
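To make the parameter-efficient setup concrete, the sketch below shows one way language-family adapters could be wired into a frozen pretrained layer. The abstract does not specify the adapter architecture or its placement, so the bottleneck design, the bottleneck dimension, the family names, and the routing by a `family` argument are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Assumed bottleneck adapter: layer norm, down-projection, non-linearity,
    up-projection, and a residual connection back to the input."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.layer_norm = nn.LayerNorm(hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        x = self.layer_norm(hidden_states)
        x = self.up(torch.relu(self.down(x)))
        return hidden_states + x  # residual connection


class FamilyAdapterLayer(nn.Module):
    """Wraps one frozen pretrained layer and routes its output through the
    adapter of the current sentence's language family (hypothetical routing)."""

    def __init__(self, frozen_layer: nn.Module, hidden_dim: int, families: list):
        super().__init__()
        self.frozen_layer = frozen_layer
        for p in self.frozen_layer.parameters():
            p.requires_grad = False  # pretrained weights stay frozen; only adapters train
        self.adapters = nn.ModuleDict(
            {fam: BottleneckAdapter(hidden_dim) for fam in families}
        )

    def forward(self, hidden_states: torch.Tensor, family: str) -> torch.Tensor:
        hidden_states = self.frozen_layer(hidden_states)
        return self.adapters[family](hidden_states)


# Illustrative usage with a stand-in frozen layer and made-up family names:
# one adapter set is shared by all languages of a family, so related languages
# can transfer to each other while unrelated families do not interfere.
layer = FamilyAdapterLayer(nn.Linear(512, 512), hidden_dim=512,
                           families=["germanic", "slavic", "uralic"])
out = layer(torch.randn(2, 10, 512), family="slavic")
```

In this framing, per-pair adapters would correspond to one `BottleneckAdapter` per language pair (no sharing), a single shared adapter to one entry in `adapters` for all languages (full sharing, hence interference), and language-family adapters to the intermediate grouping shown here.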