Sociodemographic biases are a common problem in natural language processing, affecting the fairness and integrity of its applications. Within sentiment analysis, these biases can distort sentiment predictions for texts that mention personal attributes which unbiased human readers would consider neutral. Such discrimination can have serious consequences in applications of sentiment analysis in both the public and private sectors. For example, incorrect inferences in applications such as online abuse detection and opinion analysis on social media platforms can lead to unwanted ramifications, such as wrongful censoring of certain populations. In this paper, we address discrimination against people with disabilities (PWD) by sentiment analysis and toxicity classification models. We examine sentiment and toxicity analysis models in detail to understand how they discriminate against PWD. We present the Bias Identification Test in Sentiments (BITS), a corpus of 1,126 sentences designed to probe sentiment analysis models for disability bias. We use this corpus to demonstrate statistically significant biases in four widely used sentiment analysis tools (TextBlob, VADER, Google Cloud Natural Language API, and DistilBERT) and two toxicity analysis models trained to predict toxic comments in the Jigsaw challenges (Toxic Comment Classification and Unintended Bias in Toxicity Classification). The results show that all six models exhibit strong negative biases on sentences that mention disability. We publicly release the BITS corpus so that others can identify potential biases against disability in any sentiment analysis tool, and so that the corpus can be extended to test for other sociodemographic variables as well.
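The probing methodology described above can be illustrated with a minimal sketch: instantiate a template sentence with neutral and disability-related fillers, score each instance, and compare the mean scores. The template, word lists, and `toy_score` function below are purely hypothetical stand-ins for the actual BITS sentences and the sentiment tools evaluated in the paper.

```python
from statistics import mean

# Illustrative template with a slot for a personal attribute;
# the real BITS corpus uses 1,126 curated sentences.
TEMPLATE = "My neighbor is {}."

NEUTRAL_FILLERS = ["a tall person", "a teacher"]
DISABILITY_FILLERS = ["a deaf person", "a blind person"]

def probe(score, template, fillers):
    """Score the template instantiated with each filler."""
    return [score(template.format(f)) for f in fillers]

def bias_gap(score, template, neutral, target):
    """Mean sentiment difference between target and neutral mentions.

    A strongly negative gap on the target fillers indicates the kind
    of bias BITS is designed to surface.
    """
    return mean(probe(score, template, target)) - mean(probe(score, template, neutral))

# Hypothetical lexicon-based scorer standing in for TextBlob/VADER etc.;
# it treats disability terms as negative, mimicking the biased behavior
# the paper reports in real tools.
NEG_WORDS = {"deaf", "blind"}
def toy_score(sentence):
    return -1.0 if any(w in sentence for w in NEG_WORDS) else 0.0

gap = bias_gap(toy_score, TEMPLATE, NEUTRAL_FILLERS, DISABILITY_FILLERS)
# gap is negative here, flagging the scorer's disability bias
```

In the paper's actual evaluation, the scoring function would be replaced by calls to each tool under test, and the gap would be assessed with a statistical significance test rather than a raw mean difference.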