Language models (LMs) have introduced a major paradigm shift in Natural Language Processing (NLP), where large pre-trained LMs have become integral to most NLP tasks. These models learn useful and relevant representations of the language without any supervision, and fine-tuning them on typical NLP tasks yields significantly higher accuracy than traditional approaches. However, training such models requires a massively large corpus that is a good representation of the language. English LMs generally perform better than their counterparts in other languages due to the availability of massive English corpora. This work elaborates on the design and development of a large Arabic corpus consisting of over 500 GB of cleaned Arabic text, targeted at improving the cross-domain knowledge and downstream generalization capability of large-scale language models. The corpus is then used to train a large Arabic LM. To evaluate the effectiveness of the LM, it is fine-tuned on a number of typical NLP tasks, which demonstrate a significant boost of 4.5% to 8.5% compared to the same tasks fine-tuned on multilingual BERT (mBERT). To the best of my knowledge, this is currently the largest clean and diverse Arabic corpus ever collected.