Social stereotypes negatively impact individuals' judgements about different groups and may play a critical role in how people understand language directed toward minority social groups. Here, we assess the role of social stereotypes in the automated detection of hateful language by examining the relationship between individual annotators' biases and the erroneous classification of texts by hate speech classifiers. Specifically, in Study 1 we investigate the impact of novice annotators' stereotypes on their hate-speech-annotation behavior. In Study 2 we examine the effect of language-embedded stereotypes on expert annotators' aggregated judgements in a large annotated corpus. Finally, in Study 3 we show how language-embedded stereotypes are associated with systematic prediction errors in a neural-network hate speech classifier. Our results demonstrate that hate speech classifiers learn human-like biases that can further perpetuate social inequalities when propagated at scale. This framework, which combines social-psychological and computational-linguistic methods, provides insight into additional sources of bias in hate speech moderation, informing ongoing debates regarding fairness in machine learning.
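To make the kind of error analysis described in Study 3 concrete, the sketch below shows one plausible way to quantify an association between group-level stereotypes and a classifier's systematic errors: computing the false-positive rate on non-hateful texts mentioning each social group and correlating it with a per-group stereotype score. The group names, stereotype scores, and prediction data are hypothetical placeholders, not the paper's actual corpus, model, or pipeline.

```python
# Hypothetical sketch: relating per-group stereotype scores to a hate
# speech classifier's false-positive rates. All data below (groups,
# scores, predictions) are illustrative placeholders.
from statistics import correlation  # available in Python 3.10+

# (group mentioned, gold label, predicted label); 1 = "hate", 0 = "not hate".
predictions = [
    ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 0, 1),
    ("group_b", 0, 0), ("group_b", 0, 0), ("group_b", 0, 1),
    ("group_c", 0, 0), ("group_c", 0, 0), ("group_c", 0, 0),
]

# Hypothetical stereotype scores per group (e.g., a negativity rating).
stereotype_score = {"group_a": 0.9, "group_b": 0.5, "group_c": 0.1}

def false_positive_rate(group: str) -> float:
    """Share of non-hateful texts mentioning `group` flagged as hate."""
    preds = [pred for g, gold, pred in predictions if g == group and gold == 0]
    return sum(preds) / len(preds)

groups = sorted(stereotype_score)
fprs = [false_positive_rate(g) for g in groups]
scores = [stereotype_score[g] for g in groups]

# A positive correlation would suggest that non-hateful texts mentioning
# more negatively stereotyped groups are disproportionately misclassified.
print({g: round(f, 2) for g, f in zip(groups, fprs)})
print("correlation(stereotype, FPR):", round(correlation(scores, fprs), 2))
```

On real data, such a correlation would of course be estimated over many groups and texts with appropriate statistical controls; this toy example only illustrates the shape of the analysis.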