Arabic is a widely spoken Semitic language with many dialects. Given the success of pre-trained language models, many transformer models trained on Arabic and its dialects have surfaced. While these models have been compared with respect to downstream NLP tasks, no evaluation has been carried out to directly compare their internal representations. We probe how linguistic information is encoded in Arabic pretrained models trained on different varieties of the Arabic language. We perform a layer and neuron analysis on the models using three intrinsic tasks: two morphological tagging tasks, based on MSA (Modern Standard Arabic) and dialectal POS tagging respectively, and a dialect identification task. Our analysis yields several interesting findings: i) word morphology is learned at the lower and middle layers, ii) dialect identification requires more knowledge and is hence preserved even in the final layers, iii) despite a large overlap in their vocabulary, MSA-based models fail to capture the nuances of Arabic dialects, and iv) neurons in the embedding layers are polysemous in nature, while neurons in the middle layers are exclusive to specific properties.
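The layer analysis mentioned above is typically carried out by training a lightweight probe on frozen representations extracted from each layer of the pretrained model. The following is a minimal sketch of such a per-layer probe, under assumed choices (the model name, the toy two-example dataset, mean pooling, and a logistic-regression probe are all illustrative, not the authors' actual setup):

```python
# Minimal layer-wise probing sketch (illustrative assumptions throughout).
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # assumed example; any Arabic BERT works
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

# Toy sentence-level dialect-identification data (hypothetical labels).
texts = ["sentence written in MSA", "sentence written in a dialect"]  # placeholders
labels = [0, 1]                                                       # 0 = MSA, 1 = dialect

def layer_representations(text):
    """Return a mean-pooled sentence vector from every layer (incl. embeddings)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc)
    # out.hidden_states: one (1, seq_len, dim) tensor per layer
    return [h.mean(dim=1).squeeze(0).numpy() for h in out.hidden_states]

# Group features by layer across the (toy) dataset.
per_layer = list(zip(*[layer_representations(t) for t in texts]))

# Train one linear probe per layer; a real analysis would use held-out data
# and far more examples, and compare probe accuracy across layers.
for layer_idx, feats in enumerate(per_layer):
    probe = LogisticRegression(max_iter=1000).fit(list(feats), labels)
    print(f"layer {layer_idx}: accuracy = {probe.score(list(feats), labels):.2f}")
```

Comparing the probe's accuracy across layers indicates where a given property (morphology, dialect identity) is most strongly encoded; neuron-level analysis follows the same idea but ranks individual dimensions instead of whole layers.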