Hate speech is a form of online harassment involving abusive language, commonly seen in social media posts. This sort of harassment typically targets specific group characteristics such as religion, gender, or ethnicity, and it carries both societal and economic consequences. The automatic detection of abusive language in text postings has always been a difficult task, but it has lately been receiving much interest from the scientific community. This paper addresses the important problem of discerning hateful content in social media. The model we propose is an extension of an existing approach based on LSTM neural network architectures, which we appropriately enhanced and fine-tuned to detect certain forms of hateful language, such as racism or sexism, in short texts. The most significant enhancement is the conversion to a two-stage scheme consisting of Recurrent Neural Network (RNN) classifiers. The outputs of all One-vs-Rest (OvR) classifiers from the first stage are combined and used to train the second-stage classifier, which finally determines the type of harassment. Our study includes a performance comparison of several proposed alternative methods for the second stage, evaluated on a public corpus of 16k tweets, followed by a generalization study on another dataset. The reported results show the superior classification quality of the proposed scheme in the task of hate speech detection compared to the current state of the art.
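The two-stage scheme described above is a form of stacked classification: each first-stage OvR classifier scores one class against the rest, and those scores become the input features of the second-stage classifier. The following is a minimal sketch of that idea, with logistic regression standing in for the paper's LSTM-based RNN classifiers and with purely synthetic data and class labels (0 = neutral, 1 = racism, 2 = sexism are illustrative assumptions, not the paper's encoding).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: 300 "tweets" as 10-dim feature vectors,
# with illustrative labels 0 = neutral, 1 = racism, 2 = sexism.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = rng.integers(0, 3, size=300)

classes = [0, 1, 2]

# Stage 1: one binary One-vs-Rest classifier per class.
# (The paper uses LSTM networks here; logistic regression is a
# hypothetical stand-in to keep the sketch self-contained.)
stage1 = []
meta_columns = []
for c in classes:
    clf = LogisticRegression().fit(X, (y == c).astype(int))
    stage1.append(clf)
    # Probability that a sample belongs to class c (vs. rest).
    meta_columns.append(clf.predict_proba(X)[:, 1])

# Combine the OvR outputs into the second stage's feature matrix.
meta_X = np.column_stack(meta_columns)

# Stage 2: a classifier trained on the combined OvR scores makes
# the final decision on the type of harassment.
stage2 = LogisticRegression().fit(meta_X, y)
pred = stage2.predict(meta_X)
```

In a faithful implementation the stage-1 scores fed to stage 2 would come from held-out predictions (e.g. cross-validation folds) rather than the training set itself, to avoid leaking stage-1 overfitting into the second stage.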