Pre-trained transformers are now the de facto models in Natural Language Processing, given their state-of-the-art results in many tasks and languages. However, most current models have been trained on languages for which large text resources are already available (e.g., English, French, and Arabic). As a result, many low-resource languages still need more attention from the community. In this paper, we study the Algerian dialect, which has several specificities that make the use of Arabic or multilingual models inappropriate. To address this issue, we collected more than one million Algerian tweets and pre-trained the first Algerian language model: DziriBERT. When compared with existing models, DziriBERT achieves better results, especially when dealing with the Roman script. The results show that a dedicated model pre-trained on a small dataset (150 MB) can outperform existing models that were trained on much larger corpora (hundreds of GB). Finally, our model is publicly available to the community.
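Since the model is released publicly, it can be loaded with the Hugging Face transformers library. The following is a minimal sketch, assuming the model is hosted under the repository identifier alger-ia/dziribert; the exact identifier should be checked against the official release.

```python
# Minimal sketch: loading the released DziriBERT checkpoint with the
# Hugging Face transformers library. The repository id "alger-ia/dziribert"
# is an assumption and should be verified against the official release.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("alger-ia/dziribert")
model = AutoModelForMaskedLM.from_pretrained("alger-ia/dziribert")

# Example: masked-token prediction on a sentence containing a [MASK] token.
text = f"DziriBERT was pre-trained on Algerian {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Retrieve the top-5 candidate tokens for the masked position.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_tokens))
```

The same checkpoint can also serve as the encoder for downstream classification tasks (e.g., via AutoModelForSequenceClassification) and fine-tuned on labeled Algerian-dialect data.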