Detecting online hate is a difficult task that even state-of-the-art models struggle with. Typically, hate speech detection models are evaluated by measuring their performance on held-out test data using metrics such as accuracy and F1 score. However, this approach makes it difficult to identify specific model weak points. It also risks overestimating generalisable model performance due to increasingly well-evidenced systematic gaps and biases in hate speech datasets. To enable more targeted diagnostic insights, we introduce HateCheck, a suite of functional tests for hate speech detection models. We specify 29 model functionalities motivated by a review of previous research and a series of interviews with civil society stakeholders. We craft test cases for each functionality and validate their quality through a structured annotation process. To illustrate HateCheck's utility, we test near-state-of-the-art transformer models as well as two popular commercial models, revealing critical model weaknesses.
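To make the idea of functional testing concrete, the sketch below shows how a single HateCheck-style functionality might be evaluated: template-generated test cases with gold labels, scored per functionality rather than in aggregate. The `classify` stub, the template, and the group list are illustrative assumptions for this sketch, not HateCheck's actual test cases or implementation; the real suite covers 29 validated functionalities.

```python
# Minimal sketch of functional testing for a hate speech classifier.
# The classifier stub, template, and groups are illustrative assumptions,
# not HateCheck's actual test cases.

def classify(text: str) -> str:
    """Placeholder for a real model; returns 'hateful' or 'non-hateful'."""
    return "hateful" if "hate" in text.lower() else "non-hateful"

# One hypothetical functionality: slur-free derogation of a protected
# group, generated from a template so failures are easy to attribute.
GROUPS = ["women", "immigrants", "disabled people"]
TEMPLATE = "I really can't stand {group}, they are the worst."

cases = [(TEMPLATE.format(group=g), "hateful") for g in GROUPS]

# Per-functionality accuracy: fraction of cases where the model output
# matches the gold label for this one functionality.
hits = sum(classify(text) == gold for text, gold in cases)
print(f"derogation functionality: {hits}/{len(cases)} cases passed")
```

Because every case in a functionality shares one construction, a low pass rate pinpoints a specific weakness (here, derogation without slurs) in a way that a single held-out F1 score cannot.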