AI-driven decision-making can lead to discrimination against certain individuals or social groups based on protected characteristics/attributes such as race, gender, or age. The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models. Still, the vast majority of methods proposed thus far assess fairness with respect to a single protected attribute, e.g., only gender or race. In reality, however, human identities are multi-dimensional, and discrimination can occur on the basis of more than one protected characteristic, leading to the so-called ``multi-dimensional discrimination'' or ``multi-dimensional fairness'' problem. While well elaborated in the legal literature, the multi-dimensionality of discrimination is far less explored in the machine learning community. Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain, whereas other notions, such as additive and sequential discrimination, are less studied or not yet considered. In this work, we overview the different definitions of multi-dimensional discrimination/fairness in the legal domain, as well as how (and whether) they have been transferred to and operationalized in the fairness-aware machine learning domain. By juxtaposing these two domains, we draw the connections, identify the limitations, and point out open research directions.
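To make the single-attribute versus multi-dimensional contrast concrete, the following minimal sketch (not taken from the paper; the data, column names, and attribute values are all hypothetical) compares a single-attribute fairness audit with an intersectional one, using the demographic parity gap as the fairness measure. It illustrates how per-attribute audits can pass even when an intersectional subgroup is severely disadvantaged:

\begin{verbatim}
import pandas as pd

# Toy predictions: y_hat is a model's binary decision; gender and race
# are two protected attributes (all values are hypothetical).
df = pd.DataFrame({
    "gender": ["M", "M", "F", "F", "M", "M", "F", "F"],
    "race":   ["A", "A", "B", "B", "B", "B", "A", "A"],
    "y_hat":  [ 1,   1,   1,   1,   0,   0,   0,   0 ],
})

def parity_gap(frame, group_cols):
    """Demographic parity gap: max minus min positive-prediction
    rate P(y_hat = 1) over the (sub)groups defined by group_cols."""
    rates = frame.groupby(group_cols)["y_hat"].mean()
    return rates.max() - rates.min()

# Single-attribute audits: each protected attribute in isolation.
print(parity_gap(df, ["gender"]))          # 0.0 -- looks perfectly fair
print(parity_gap(df, ["race"]))            # 0.0 -- looks perfectly fair

# Intersectional audit: one subgroup per combination of attributes.
print(parity_gap(df, ["gender", "race"]))  # 1.0 -- maximal disparity
\end{verbatim}

In this toy example, both single-attribute audits report a zero gap, yet the subgroups (M, B) and (F, A) receive no positive predictions at all, which is precisely the kind of hidden subgroup discrimination that the intersectional definition is designed to expose.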