The shift of public debate to the digital sphere has been accompanied by a rise in online hate speech. While many promising approaches to hate speech classification have been proposed, studies often focus on a single language, usually English, and do not address three key concerns: post-deployment performance, classifier maintenance, and infrastructural limitations. In this paper, we introduce a new human-in-the-loop BERT-based hate speech classification pipeline and trace its development from initial data collection and annotation all the way to post-deployment. Our classifier, trained on data from our original corpus of over 422k examples, is developed specifically for the inherently multilingual setting of Switzerland and achieves an F1 score of 80.5, outperforming the currently best-performing multilingual BERT-based classifier by 5.8 F1 points in German and 3.6 F1 points in French. Our systematic evaluations over a 12-month period further highlight the vital importance of continuous, human-in-the-loop classifier maintenance for ensuring robust hate speech classification post-deployment.