The BLOOM model is a large open-source multilingual language model capable of zero-shot learning, but its pretraining was limited to 46 languages. To improve its zero-shot performance on unseen languages, it is desirable to adapt BLOOM, but previous works have only explored adapting small language models. In this work, we apply existing language adaptation strategies to BLOOM and benchmark its zero-shot prompting performance on eight new languages. We find language adaptation to be effective at improving zero-shot performance in new languages. Surprisingly, adapter-based finetuning is more effective than continued pretraining for large models. In addition, we discover that prompting performance is not significantly affected by language specifics, such as the writing system, but is primarily determined by the size of the language adaptation data. We also add new languages to BLOOMZ, a multitask finetuned version of BLOOM capable of following task instructions zero-shot. We find that including a new language in the multitask finetuning mixture is the most effective method for teaching BLOOMZ a new language. We conclude that with sufficient training data, language adaptation can generalize well to diverse languages. Our code is available at \url{https://github.com/bigscience-workshop/multilingual-modeling/}.
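To make the adapter-based approach concrete, below is a minimal illustrative sketch (not the paper's exact recipe) of adapter-style language adaptation of a small BLOOM checkpoint using the Hugging Face `transformers` and `peft` libraries. The model size, LoRA hyperparameters, and the Guarani OSCAR subset used as the adaptation corpus are placeholder assumptions chosen for brevity.

```python
# Illustrative sketch: adapter-based language adaptation of BLOOM with LoRA.
# Base weights stay frozen; only low-rank adapter matrices are trained on
# monolingual text in the new language.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bigscience/bloom-560m"  # small BLOOM variant for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach LoRA adapters to BLOOM's fused attention projection.
lora_cfg = LoraConfig(r=8, lora_alpha=16,
                      target_modules=["query_key_value"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# Placeholder adaptation corpus: a small slice of Guarani OSCAR data.
raw = load_dataset("oscar", "unshuffled_deduplicated_gn", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-adapted",
                           per_device_train_batch_size=4,
                           num_train_epochs=1),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Because only the adapter parameters are updated, the memory and compute cost of adaptation stays small relative to continued pretraining of the full model, which is one practical reason adapter-based finetuning scales well to large checkpoints.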