Deep neural networks have been successfully adopted for hate speech detection. Nevertheless, the effect of word embedding models on a neural network's performance has not been appropriately examined in the literature. In our study, through different detection tasks (2-class, 3-class, and 6-class classification), we investigate the impact of both word embedding models and neural network architectures on predictive accuracy. Our focus is on the Arabic language. We first train several word embedding models on a large-scale unlabelled Arabic text corpus. Next, based on a dataset of Arabic hate and offensive speech, for each detection task, we train several neural network classifiers using the pre-trained word embedding models. This yields a large collection of learned models, which allows us to conduct an exhaustive comparison. The empirical analysis demonstrates, on the one hand, the superiority of the skip-gram embedding models and, on the other hand, the superiority of the CNN architecture across the three detection tasks.
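The pipeline described above (pre-trained word embeddings feeding a CNN text classifier, with only the output layer changing across the 2-, 3-, and 6-class tasks) can be sketched as follows. This is a minimal illustrative NumPy forward pass, not the authors' implementation: the vocabulary, dimensions, and random weights are all toy assumptions, and the embedding matrix stands in for vectors learned by a skip-gram model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative sizes, not the paper's): a vocabulary of 10 tokens
# with 8-dimensional pre-trained embeddings, e.g. from a skip-gram model.
vocab_size, embed_dim = 10, 8
embeddings = rng.normal(size=(vocab_size, embed_dim))

def cnn_text_classifier(token_ids, n_classes, kernel_size=3, n_filters=4):
    """Forward pass of a minimal 1D-CNN text classifier on top of frozen
    pre-trained embeddings. Weights are random here, for illustration only."""
    x = embeddings[token_ids]                      # (seq_len, embed_dim)
    W = rng.normal(size=(n_filters, kernel_size, embed_dim))
    b = rng.normal(size=n_filters)
    seq_len = x.shape[0]
    # Slide each filter over windows of consecutive token embeddings.
    conv = np.array([
        [np.sum(W[f] * x[i:i + kernel_size]) + b[f]
         for i in range(seq_len - kernel_size + 1)]
        for f in range(n_filters)
    ])
    conv = np.maximum(conv, 0.0)                   # ReLU
    pooled = conv.max(axis=1)                      # global max-pooling
    Wo = rng.normal(size=(n_classes, n_filters))
    logits = Wo @ pooled
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()                     # softmax over classes

# The same architecture serves the 2-, 3-, and 6-class detection tasks by
# changing only the size of the output layer.
sentence = np.array([1, 4, 7, 2, 5])
for k in (2, 3, 6):
    p = cnn_text_classifier(sentence, n_classes=k)
    assert p.shape == (k,) and abs(p.sum() - 1.0) < 1e-9
```

In the study's actual setting, the embedding matrix would be loaded from the pre-trained skip-gram (or CBOW) model and the CNN weights learned on the labelled hate/offensive speech data.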