In recent years, multilingual pre-trained language models have gained prominence due to their remarkable performance on numerous downstream Natural Language Processing (NLP) tasks. However, pre-training these large multilingual language models requires large amounts of training data, which are largely unavailable for African languages. Active learning is a semi-supervised learning approach in which a model consistently and dynamically learns to identify the most informative samples to train itself on, in order to achieve better optimization and performance on downstream tasks. Furthermore, active learning effectively and practically addresses real-world data scarcity. Despite its benefits, active learning has received little consideration in NLP, and especially in the pretraining of multilingual language models. In this paper, we present AfroLM, a multilingual language model pretrained from scratch on 23 African languages (the largest effort to date) using our novel self-active learning framework. Pretrained on a dataset significantly (14x) smaller than those of existing baselines, AfroLM outperforms many multilingual pretrained language models (AfriBERTa, XLMR-base, mBERT) on various downstream NLP tasks (NER, text classification, and sentiment analysis). Additional out-of-domain sentiment analysis experiments show that \textbf{AfroLM} is able to generalize well across various domains. We release the source code and the datasets used in our framework at https://github.com/bonaventuredossou/MLM_AL.
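To illustrate the data-selection idea behind active learning, the sketch below shows a generic selection loop: a scoring function ranks unlabeled examples by how informative the model finds them, and the top-ranked examples are moved into the training pool at each round. This is a minimal illustration under stated assumptions, not the AfroLM self-active learning framework itself (which is available in the released code); score_example, select_most_informative, and active_learning_rounds are hypothetical names introduced for this sketch, and the random scorer stands in for a model-based signal such as per-example masked-language-model loss.

\begin{verbatim}
# Minimal sketch of an active-learning data-selection loop (illustrative only).
import random

def score_example(example: str) -> float:
    """Stand-in for a model-based informativeness score (e.g., MLM loss).
    Higher score = the model finds the example harder / more informative."""
    return random.random() * len(example.split())

def select_most_informative(unlabeled, k):
    """Pick the k highest-scoring examples to add to the training pool."""
    ranked = sorted(unlabeled, key=score_example, reverse=True)
    return ranked[:k], ranked[k:]

def active_learning_rounds(train_pool, unlabeled_pool, rounds=3, k=2):
    for r in range(rounds):
        # 1. (Re)train the model on the current pool -- omitted in this sketch.
        # 2. Score the remaining unlabeled data and move the top-k examples.
        chosen, unlabeled_pool = select_most_informative(unlabeled_pool, k)
        train_pool.extend(chosen)
        print(f"round {r}: added {len(chosen)} examples, "
              f"training pool size is now {len(train_pool)}")
    return train_pool

if __name__ == "__main__":
    seed = ["example seed sentence one", "example seed sentence two"]
    candidates = [f"candidate sentence {i} " + "token " * i for i in range(1, 10)]
    active_learning_rounds(seed, candidates)
\end{verbatim}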