Hate speech is a specific type of controversial content that is widely legislated as a crime and must therefore be identified and blocked. However, due to the sheer volume and velocity of the Twitter data stream, hate speech detection cannot be performed manually. To address this issue, several studies have been conducted on hate speech detection in European languages, whereas little attention has been paid to low-resource South Asian languages, leaving social media vulnerable for millions of users. In particular, to the best of our knowledge, no study has been conducted on hate speech detection in Roman Urdu text, which is widely used in the sub-continent. In this study, we scraped more than 90,000 tweets and manually filtered them to identify 5,000 Roman Urdu tweets. Subsequently, we employed an iterative approach to develop annotation guidelines and used them to generate the Hate Speech Roman Urdu 2020 corpus. The tweets in this corpus are classified at three levels: Neutral-Hostile, Simple-Complex, and Offensive-Hate Speech. As a further contribution, we applied five supervised learning techniques, including a deep learning technique, to evaluate and compare their effectiveness for hate speech detection. The results show that Logistic Regression outperformed all other techniques, including the deep learning technique, at both levels of classification, achieving an F1 score of 0.906 for distinguishing between Neutral and Hostile tweets and 0.756 for distinguishing between Offensive and Hate Speech tweets.
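To make the best-performing baseline concrete, the following is a minimal sketch of a Logistic Regression classifier over TF-IDF features for binary tweet classification, evaluated with the F1 score, using scikit-learn. The Roman Urdu tweets and labels below are hypothetical stand-ins, not samples from the Hate Speech Roman Urdu 2020 corpus, and the feature configuration is an assumption rather than the paper's exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Hypothetical placeholder tweets (not from the actual corpus).
# Labels: 0 = Neutral, 1 = Hostile.
tweets = [
    "kya haal hai bhai sab theek",
    "aaj mausam bohat acha hai",
    "match dekha kal ka kamaal tha",
    "khana bohat mazedar tha shukriya",
    "tum log sab bekar ho nikal jao",
    "is bande se nafrat hai mujhe",
    "chup karo warna dekh lena tum",
    "yeh log zaleel hain inhe hatao",
]
labels = [0, 0, 0, 0, 1, 1, 1, 1]

# Character n-grams are one way to handle the spelling variation
# common in romanized scripts such as Roman Urdu.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(tweets, labels)

# In-sample F1 on this toy data; a real evaluation would use a held-out set.
preds = model.predict(tweets)
print(f1_score(labels, preds))
```

The same pipeline applies unchanged to the Offensive vs. Hate Speech level by swapping in the corresponding subset of tweets and labels.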