Fairness has emerged as an important requirement to guarantee that Machine Learning (ML) predictive systems do not discriminate against specific individuals or entire sub-populations, in particular, minorities. Given the inherent subjectivity of the concept of fairness, several notions of fairness have been introduced in the literature. This paper is a survey that illustrates the subtleties between these fairness notions through a large number of examples and scenarios. In addition, unlike other surveys in the literature, it addresses the question: which notion of fairness is most suited to a given real-world scenario, and why? Our attempt to answer this question consists of (1) identifying the set of fairness-related characteristics of the real-world scenario at hand, (2) analyzing the behavior of each fairness notion, and then (3) matching these two elements to recommend the most suitable fairness notion in every specific setup. The results are summarized in a decision diagram that practitioners and policymakers can use to navigate the relatively large catalog of ML fairness notions.
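To make the contrast between such notions concrete, the following is a minimal sketch, not drawn from the survey itself, of two widely used group fairness notions for a binary classifier and a binary protected attribute: statistical parity compares positive-prediction rates across groups, while equal opportunity compares true-positive rates among qualified individuals. The function names and the synthetic data are illustrative assumptions.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between the two groups
    (one common formalization of statistical parity / demographic parity)."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between the two groups,
    restricted to qualified individuals (y_true == 1)."""
    tpr_a = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_b = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_a - tpr_b

# Toy example: random binary labels and predictions for two sub-populations.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)   # protected attribute (0 or 1)
y_true = rng.integers(0, 2, size=1000)  # ground-truth outcomes
y_pred = rng.integers(0, 2, size=1000)  # model predictions

print(statistical_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

A value near zero under either metric indicates parity under that notion; the two metrics can disagree on the same predictions, which is precisely the kind of subtlety the survey's scenarios illustrate.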