With the growth of social media, the spread of hate speech is also increasing rapidly. Social media are widely used in many countries, and hate speech is spreading in these countries as well. This creates a need for multilingual hate speech detection algorithms. At the moment, much research in this area is dedicated to English. The HASOC track intends to provide a platform to develop and optimize hate speech detection algorithms for Hindi, German, and English. The dataset is collected from a Twitter archive and pre-classified by a machine learning system. HASOC has two sub-tasks for all three languages: task A is a binary classification problem (Hate and Offensive vs. Not Offensive), while task B is a fine-grained classification problem with three classes: hate speech (HATE), OFFENSIVE, and PROFANITY. Overall, 252 runs were submitted by 40 teams. The best classification algorithms for task A achieved F1 measures of 0.51, 0.53, and 0.52 for English, Hindi, and German, respectively. For task B, the best classification algorithms achieved F1 measures of 0.26, 0.33, and 0.29 for English, Hindi, and German, respectively. This article presents the tasks and the data development as well as the results. The best performing algorithms were mainly variants of the transformer architecture BERT. However, other systems were also applied with good success.
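The systems above are ranked by F1 measure. As a minimal sketch of how such an evaluation works, the pure-Python function below computes a macro-averaged F1 over the task A label scheme; the label names (HOF, NOT) and the gold/predicted sequences are illustrative assumptions, not real system output, and whether the track used macro or weighted averaging is not stated in this abstract.

```python
def macro_f1(gold, pred, labels):
    """Macro-averaged F1: compute per-class F1, then take the unweighted mean."""
    f1s = []
    for label in labels:
        # Count true positives, false positives, and false negatives per class.
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative labels only (HOF = hateful/offensive, NOT = not offensive).
gold = ["HOF", "NOT", "NOT", "HOF", "NOT", "HOF"]
pred = ["HOF", "NOT", "HOF", "HOF", "NOT", "NOT"]
print(round(macro_f1(gold, pred, ["HOF", "NOT"]), 3))  # → 0.667
```

Macro averaging weights each class equally regardless of its frequency, which matters for hate speech datasets, where the offensive class is typically the minority.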