Artificial Intelligence (AI) is increasingly used to make important decisions about people. While issues of AI bias and proxy discrimination are well explored, less attention has been paid to the harms created by profiling based on groups that do not map to or correlate with legally protected groups such as sex or ethnicity. This raises a question: are existing equality laws able to protect against emergent AI-driven inequality? This article examines the legal status of algorithmic groups in North American and European non-discrimination doctrine, law, and jurisprudence, and shows that algorithmic groups are not comparable to traditional protected groups. Nonetheless, these new groups are worthy of protection. I propose a new theory of harm, the "theory of artificial immutability," which aims to bring algorithmic groups within the scope of the law. My theory describes how algorithmic groups function as de facto immutable characteristics, limiting people's autonomy and preventing them from achieving important goals.