Recent advances in the pre-training of language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out of these datasets. This is primarily because many widely spoken languages are not well represented on the web and are therefore excluded from the large-scale crawls used to create datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pre-training? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a new African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both to additional languages and to additional domains is to fine-tune large pre-trained models on small quantities of high-quality translation data.
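The abstract's recommended recipe is to fine-tune a large pre-trained multilingual translation model on a small, high-quality parallel corpus. Below is a minimal sketch of that setup using the HuggingFace Transformers and Datasets libraries with the publicly available M2M-100 checkpoint; the specific checkpoint, hyperparameters, language pair (English to Hausa), and the tiny in-line example corpus are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch: fine-tuning a pre-trained multilingual MT model (M2M-100) on a small
# parallel news corpus. Checkpoint, language pair, and hyperparameters are
# illustrative assumptions, not prescribed by the paper.
from datasets import Dataset
from transformers import (
    M2M100ForConditionalGeneration,
    M2M100Tokenizer,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    DataCollatorForSeq2Seq,
)

model_name = "facebook/m2m100_418M"
tokenizer = M2M100Tokenizer.from_pretrained(model_name)
model = M2M100ForConditionalGeneration.from_pretrained(model_name)

# Hypothetical small, high-quality parallel corpus (English -> Hausa).
# In practice this would be a few thousand translated news sentences.
pairs = [
    {"en": "The president addressed the nation.",
     "ha": "Shugaban kasa ya yi jawabi ga al'umma."},
]
raw = Dataset.from_list(pairs)

tokenizer.src_lang = "en"
tokenizer.tgt_lang = "ha"

def preprocess(example):
    # Tokenize the source sentence and the target translation together.
    return tokenizer(
        example["en"],
        text_target=example["ha"],
        max_length=128,
        truncation=True,
    )

tokenized = raw.map(preprocess, remove_columns=raw.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="m2m100-en-ha-news",   # hypothetical output path
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=3e-5,
    save_strategy="epoch",
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```

For target languages outside the model's original pre-training set, the same fine-tuning loop applies, but one would typically reuse the code of a related supported language or extend the tokenizer's vocabulary before training.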