Massive efforts are being made to reduce biases in both data and algorithms in order to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making has harmed women, people of color, and other marginalized groups. However, the field of AI fairness still suffers from a blind spot: its insensitivity to discrimination against animals. This paper is the first to describe 'speciesist bias' and to investigate it in several different AI systems. Speciesist biases are learned and solidified by AI applications when they are trained on datasets in which speciesist patterns prevail. Such patterns can be found in image recognition systems, large language models, and recommender systems. AI technologies therefore currently play a significant role in perpetuating and normalizing violence against animals. This can only change if AI fairness frameworks widen their scope and include mitigation measures for speciesist biases. This paper addresses the AI community in this regard and stresses the influence AI systems can have on either increasing or reducing the violence inflicted on animals, especially farmed animals.
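To make the notion of a learned speciesist pattern concrete, the following is a minimal sketch of one way such a bias could be probed in an off-the-shelf language model: sentences that differ only in the animal mentioned are scored by a sentiment classifier, and systematic score gaps between companion animals and farmed animals would hint at a speciesist pattern absorbed from the training data. The probe template, the species list, and the use of the Hugging Face `transformers` sentiment pipeline are illustrative assumptions, not the paper's own evaluation protocol.

```python
# A minimal, illustrative probe for speciesist bias in a language model.
# Assumption: the default Hugging Face sentiment-analysis pipeline is used;
# the template sentence and species list are hypothetical examples,
# not the method described in the paper.
from transformers import pipeline  # pip install transformers torch

classifier = pipeline("sentiment-analysis")

# Identical sentences that differ only in the animal mentioned.
template = "It is acceptable to keep a {} in a small cage."
species = ["dog", "cat", "pig", "chicken", "cow"]

for animal in species:
    result = classifier(template.format(animal))[0]
    # Systematic score gaps between companion and farmed animals would
    # hint at a speciesist pattern learned from the training data.
    print(f"{animal:>8}: {result['label']} (score={result['score']:.3f})")
```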