The exponential increase in the use of the Internet and social media over the last two decades has changed human interaction. This has brought many positive outcomes, but it has also introduced risks and harms. Because the volume of harmful content online, such as hate speech, exceeds what humans can manage, interest in the academic community in automated means of hate speech detection has grown. In this study, we analyse six publicly available datasets by combining them into a single homogeneous dataset and classify the posts into three classes: abusive, hateful, or neither. We create a baseline model and improve its performance using various optimisation techniques. After attaining a competitive performance score, we build a tool that identifies and scores a page with an effectiveness metric in near real time and uses this as feedback to re-train our model. We demonstrate the competitive performance of our multilingual model on two languages, English and Hindi, achieving results comparable or superior to those of most monolingual models.