In recent years there have been considerable advances in pre-trained language models, and non-English versions of these models have also been made available. Due to their increasing use, many lightweight versions of these models (with fewer parameters) have also been released to speed up training and inference times. However, versions of these lighter models (e.g., ALBERT, DistilBERT) for languages other than English are still scarce. In this paper we present ALBETO and DistilBETO, which are versions of ALBERT and DistilBERT pre-trained exclusively on Spanish corpora. We train several versions of ALBETO, ranging from 5M to 223M parameters, and one version of DistilBETO with 67M parameters. We evaluate our models on the GLUES benchmark, which includes a variety of natural language understanding tasks in Spanish. The results show that our lightweight models achieve results competitive with those of BETO (Spanish-BERT) despite having fewer parameters. More specifically, our largest ALBETO model outperforms all other models on the MLDoc, PAWS-X, XNLI, MLQA, SQAC and XQuAD datasets. However, BETO remains unbeaten on POS and NER. As a further contribution, we make all models publicly available to the community for future research.
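Since the models are publicly released, a minimal sketch of how one of them might be loaded with the Hugging Face transformers library follows. The repository identifier dccuchile/albert-base-spanish and the example sentence are assumptions for illustration, not confirmed by the abstract.

```python
# A minimal sketch of loading a released checkpoint with Hugging Face
# transformers. The model identifier below is an assumption and may differ
# from the names actually published by the authors.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "dccuchile/albert-base-spanish"  # assumed ALBETO base checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Encode a Spanish sentence and run a single forward pass.
inputs = tokenizer(
    "Los modelos livianos alcanzan resultados competitivos.",
    return_tensors="pt",
)
outputs = model(**inputs)
print(outputs.logits.shape)  # (batch, sequence_length, vocab_size)
```

For downstream GLUES-style tasks, the same checkpoint would typically be loaded through a task-specific head instead, e.g. AutoModelForSequenceClassification for text classification or AutoModelForQuestionAnswering for the QA datasets.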