The exponential growth of social media and micro-blogging sites not only provides platforms for empowering freedom of expression and individual voices, but also enables people to exhibit anti-social behavior such as online harassment, cyberbullying, and hate speech. Numerous works have been proposed to analyze social and anti-social behavior from textual data, mostly for highly-resourced languages like English. However, other languages are under-resourced, e.g., South Asian languages like Bengali, and lack the computational resources needed for accurate natural language processing (NLP). In this paper, we propose an explainable approach for hate speech detection in the under-resourced Bengali language, which we call DeepHateExplainer. Bengali texts are first comprehensively preprocessed, then classified into political, personal, geopolitical, and religious hate using a neural ensemble of transformer-based architectures (i.e., monolingual Bangla BERT-base, multilingual BERT-cased/uncased, and XLM-RoBERTa). The most and least important terms are then identified using sensitivity analysis and layer-wise relevance propagation~(LRP), before providing human-interpretable explanations. Finally, we compute comprehensiveness and sufficiency scores to measure the quality of the explanations w.r.t. faithfulness. Evaluations against machine learning~(linear and tree-based models) and deep neural network (i.e., CNN, Bi-LSTM, and Conv-LSTM with word embeddings) baselines yield F1-scores of 78%, 91%, 89%, and 84% for political, personal, geopolitical, and religious hate, respectively, outperforming both ML and DNN baselines.
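For concreteness, below is a minimal sketch of how such a transformer ensemble could combine class predictions. The abstract does not specify the combination scheme, so the softmax-averaging strategy, the Hugging Face model identifiers, and the function names are illustrative assumptions rather than the authors' exact implementation; in practice, each model would first be fine-tuned on the labeled Bengali hate speech data.

\begin{verbatim}
# Sketch: softmax-averaging ensemble over transformer classifiers (assumed scheme).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["political", "personal", "geopolitical", "religious"]
MODEL_NAMES = [
    "sagorsarker/bangla-bert-base",    # monolingual Bangla BERT-base
    "bert-base-multilingual-cased",    # multilingual BERT (cased)
    "bert-base-multilingual-uncased",  # multilingual BERT (uncased)
    "xlm-roberta-base",                # XLM-RoBERTa
]

def ensemble_predict(text: str) -> str:
    """Average class probabilities across the (fine-tuned) models, return argmax label."""
    probs = []
    for name in MODEL_NAMES:
        tokenizer = AutoTokenizer.from_pretrained(name)
        # Note: without fine-tuning, the classification head is randomly initialized.
        model = AutoModelForSequenceClassification.from_pretrained(
            name, num_labels=len(LABELS)
        )
        model.eval()
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            logits = model(**inputs).logits
        probs.append(torch.softmax(logits, dim=-1))
    avg = torch.stack(probs).mean(dim=0)  # average the four probability vectors
    return LABELS[int(avg.argmax(dim=-1))]
\end{verbatim}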
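The comprehensiveness and sufficiency scores mentioned above are commonly defined as follows in the rationale-faithfulness literature (e.g., the ERASER benchmark); the notation here is illustrative, and the exact formulation used in the paper may differ:

\begin{align}
  \mathrm{comprehensiveness} &= m(x_i)_j - m(x_i \setminus r_i)_j,\\
  \mathrm{sufficiency}       &= m(x_i)_j - m(r_i)_j,
\end{align}

where $m(\cdot)_j$ denotes the model's predicted probability for class $j$, $x_i$ is the input text, $r_i$ is the extracted rationale (the most important terms), and $x_i \setminus r_i$ is the text with those terms removed. Intuitively, a high comprehensiveness (removing the rationale sharply reduces the predicted probability) and a low sufficiency (the rationale alone nearly reproduces the prediction) indicate a more faithful explanation.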