Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter than those produced by XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one-million-token vocabulary. XLM-V outperforms XLM-R on every task we tested, ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
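To make the core idea concrete, the following is a minimal, self-contained sketch of how one might de-emphasize token sharing between lexically dissimilar languages and split a global vocabulary budget across language groups. The toy lexicons, the Jaccard-overlap threshold, and the "budget proportional to unique tokens" rule are illustrative assumptions for exposition, not the paper's actual procedure.

```python
# Sketch (not XLM-V's exact algorithm): group languages by lexical overlap,
# then give each group its own share of the total vocabulary budget so that
# every language gets sufficient coverage.

# Toy per-language "lexicons"; in practice these would come from vocabularies
# trained on monolingual corpora (e.g. with SentencePiece).
lexicons = {
    "en": {"the", "union", "model", "language", "token"},
    "de": {"die", "union", "modell", "sprache", "token"},
    "zh": {"模型", "语言", "词元", "联合", "训练"},
    "ja": {"モデル", "言語", "トークン", "学習", "連合"},
}

def jaccard(a: set, b: set) -> float:
    """Lexical overlap between two languages' vocabularies."""
    return len(a & b) / len(a | b)

# 1) Greedy clustering: a language joins a cluster only if its overlap with
#    every member is high enough, so languages with little lexical overlap
#    do not end up sharing one sub-vocabulary.
THRESHOLD = 0.2  # illustrative value
clusters: list[set[str]] = []
for lang in lexicons:
    for cluster in clusters:
        if all(jaccard(lexicons[lang], lexicons[o]) >= THRESHOLD for o in cluster):
            cluster.add(lang)
            break
    else:
        clusters.append({lang})

# 2) Allocate the global vocabulary budget across clusters, here simply in
#    proportion to the number of unique tokens each cluster contributes.
TOTAL_VOCAB = 1_000_000
cluster_tokens = [set().union(*(lexicons[l] for l in c)) for c in clusters]
total_unique = sum(len(t) for t in cluster_tokens)
for cluster, tokens in zip(clusters, cluster_tokens):
    budget = round(TOTAL_VOCAB * len(tokens) / total_unique)
    print(f"{sorted(cluster)}: vocab budget ≈ {budget}")

# The final vocabulary would then be the union of per-cluster vocabularies,
# each trained with its allocated budget.
```

With the toy data above, "en" and "de" fall into one cluster (they share "union" and "token"), while "zh" and "ja" each get their own cluster and their own slice of the budget, which is the intuition behind avoiding forced token sharing between languages with little lexical overlap.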