Algorithmic fairness provides novel methods for promoting equitable public policy using machine learning. Yet the narrow formulation of algorithmic fairness often provides cover for algorithms that exacerbate oppression, leading critics to call for a more justice-oriented approach. This article takes up these calls and proposes a method for operationalizing a social justice orientation into algorithmic fairness. First, I argue that algorithmic fairness suffers from a significant methodological limitation: it restricts analysis to isolated decision points. Because algorithmic fairness relies on this narrow scope of analysis, it yields a reform strategy that is fundamentally constrained by the "impossibility of fairness" (an incompatibility between mathematical definitions of fairness). Second, in light of these flaws, I draw on theories of substantive equality from law and philosophy to propose an alternative methodology: "substantive algorithmic fairness." Because substantive algorithmic fairness takes a more expansive scope of analysis, it suggests reform strategies that escape from the impossibility of fairness. These strategies provide a rigorous guide for employing algorithms to alleviate social injustice. In sum, substantive algorithmic fairness presents a new direction for the field of algorithmic fairness: away from formal mathematical models of "fairness" and toward substantive evaluations of how algorithms can (and cannot) promote justice.
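For readers unfamiliar with the "impossibility of fairness" referenced above, the following minimal sketch (not part of the article, and using hypothetical numbers chosen only for illustration) shows one standard version of the result, in the form popularized by Chouldechova (2017): when two groups have different base rates, a binary classifier cannot simultaneously satisfy predictive parity (equal PPV) and error rate balance (equal FPR and FNR).

```python
# Sketch of the impossibility of fairness via the identity
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR),
# where p is a group's base rate (prevalence). Holding PPV and FNR equal
# across groups with different base rates forces their FPRs to differ.

def implied_fpr(prevalence: float, ppv: float, fnr: float) -> float:
    """False positive rate implied by the identity above."""
    return (prevalence / (1 - prevalence)) * ((1 - ppv) / ppv) * (1 - fnr)

# Hypothetical values: same PPV and FNR in both groups, different base rates.
ppv, fnr = 0.7, 0.3
for group, p in [("group A", 0.3), ("group B", 0.5)]:
    print(f"{group}: base rate = {p:.2f} -> implied FPR = {implied_fpr(p, ppv, fnr):.3f}")

# Output: group A's implied FPR (~0.129) differs from group B's (0.300),
# so predictive parity and error rate balance cannot both hold.
```

This is only an illustration of the mathematical incompatibility the abstract refers to; the article's contribution concerns how to respond to that incompatibility, not how to derive it.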