In this work, we provide a systematic and comprehensive empirical comparison of pretrained multilingual language models and their monolingual counterparts with regard to their monolingual task performance. We study nine typologically diverse languages with readily available pretrained monolingual models on five diverse monolingual downstream tasks. We first aim to establish, via fair and controlled comparisons, whether a gap exists between the multilingual representation and the corresponding monolingual representation of a given language, and subsequently investigate the reasons for any performance difference. To disentangle conflating factors, we train new monolingual models on the same data, with monolingually and multilingually trained tokenizers. We find that while pretraining data size is an important factor, a dedicated monolingual tokenizer plays an equally important role in downstream performance. Our results show that languages that are adequately represented in the multilingual model's vocabulary exhibit negligible performance decreases over their monolingual counterparts. We further find that replacing the original multilingual tokenizer with the specialized monolingual tokenizer improves the downstream performance of the multilingual model for almost every task and language.
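As a minimal sketch of the tokenizer-replacement setup described above, assuming the HuggingFace transformers library; the model and tokenizer identifiers (bert-base-multilingual-cased, TurkuNLP/bert-base-finnish-cased-v1) and the two-label task are illustrative placeholders, not the exact experimental configuration:

# Sketch: pair a multilingual encoder with a specialized monolingual tokenizer.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Multilingual model to be fine-tuned on a monolingual downstream task
# (placeholder task with two labels).
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2
)

# Dedicated monolingual tokenizer replacing the original multilingual one
# (Finnish used here purely as an example).
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")

# Resize the embedding matrix to the new vocabulary size. Note that this only
# changes the shape: embeddings for the new vocabulary still have to be
# (re)learned, e.g. via continued pretraining before task fine-tuning.
model.resize_token_embeddings(len(tokenizer))

batch = tokenizer(["Tämä on esimerkki."], padding=True, truncation=True,
                  return_tensors="pt")
outputs = model(**batch)
print(outputs.logits.shape)  # torch.Size([1, 2])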