A crucial but often neglected aspect of algorithmic fairness is the question of how we justify enforcing a given fairness metric from a moral perspective. When fairness metrics are proposed, they are typically argued for by highlighting their mathematical properties; the moral assumptions underlying the metric are rarely made explicit. Our aim in this paper is to consider the moral aspects associated with the statistical fairness criterion of independence (statistical parity). To this end, we consider previous work that discusses the two worldviews "What You See Is What You Get" (WYSIWYG) and "We're All Equal" (WAE) and thereby provides some guidance for clarifying the possible assumptions in the design of algorithms. We present an extension of this work that centers on morality. The most natural moral extension is that independence needs to be fulfilled if and only if differences in predictive features (e.g., high school grades and standardized test scores that are predictive of performance at university) between socio-demographic groups are caused by unjust social disparities or measurement errors. Through two counterexamples, we demonstrate that this extension is not universally true. This means that the question of whether independence should be enforced cannot be satisfactorily answered by considering only the justness of differences in the predictive features.
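For reference, the independence (statistical parity) criterion requires the prediction to be statistically independent of group membership, i.e. equal positive-prediction rates across groups. A minimal sketch of how this could be measured (the function name and data are illustrative, not taken from the paper):

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    Independence (statistical parity) holds when this difference is 0,
    i.e. P(Y_hat = 1 | group = 0) == P(Y_hat = 1 | group = 1).
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
    return rate_1 - rate_0

# Hypothetical admission decisions for two socio-demographic groups:
# group 0 receives positive predictions at rate 3/4, group 1 at rate 1/4,
# so independence is violated (difference of -0.5).
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(statistical_parity_difference(y_pred, group))
```

In practice, a small tolerance is usually allowed rather than demanding an exact difference of zero.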