Recent work has demonstrated the effectiveness of cross-lingual language model pretraining for cross-lingual understanding. In this study, we present the results of two larger multilingual masked language models, with 3.5B and 10.7B parameters. Our two new models, dubbed XLM-R XL and XLM-R XXL, outperform XLM-R by 1.8% and 2.4% average accuracy on XNLI. Our models also outperform RoBERTa-Large on several English tasks of the GLUE benchmark by 0.3% on average while handling 99 more languages. This suggests that pretrained models with larger capacity can achieve strong performance on high-resource languages while greatly improving low-resource languages. We make our code and models publicly available.
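Since the released checkpoints are masked language models, a minimal sketch of how one might query them for masked-token prediction is shown below, assuming the weights are mirrored on the Hugging Face Hub under the identifiers "facebook/xlm-roberta-xl" and "facebook/xlm-roberta-xxl" (these names, and the use of the Transformers library rather than the authors' original release, are assumptions for illustration):

```python
# Minimal masked-token prediction sketch with an assumed Hub mirror of XLM-R XL.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "facebook/xlm-roberta-xl"  # assumed identifier; the 10.7B variant would be "facebook/xlm-roberta-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Build an input with a single masked position.
text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and decode the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```

The same snippet works for any language in the model's pretraining data, since both checkpoints share one multilingual SentencePiece vocabulary.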