Algorithmic fairness is a new interdisciplinary field of study focused on how to measure whether a process, or algorithm, may unintentionally produce unfair outcomes, as well as whether and how the potential unfairness of such processes can be mitigated. Statistical discrimination describes a set of informational issues that can lead rational (i.e., Bayesian) decision-making to produce unfair outcomes even in the absence of discriminatory intent. In this article, we provide overviews of these two related literatures and draw connections between them. The comparison illustrates both the conflict between rationality and fairness and the importance of endogeneity (e.g., "rational expectations" and "self-fulfilling prophecies") in defining and pursuing fairness. Taken together, we argue, the two traditions suggest the value of considering new fairness notions that explicitly account for how the individual characteristics an algorithm intends to measure may change in response to the algorithm itself.