Despite an abundance of fairness-aware machine learning (fair-ml) algorithms, the moral justification of how these algorithms enforce fairness metrics remains largely unexplored. The goal of this paper is to elicit the moral implications of a fair-ml algorithm. To this end, we first consider the moral justification of the fairness metrics for which the algorithm optimizes. We extend previous work to arrive at three propositions that can justify the fairness metrics. Unlike previous work, our extension highlights that the consequences of predicted outcomes are important for judging fairness. Drawing on the extended framework and empirical ethics, we identify moral implications of the fair-ml algorithm. We focus on the two optimization strategies inherent to the algorithm: group-specific decision thresholds and randomized decision thresholds. We argue that the justification of the algorithm can differ depending on one's assumptions about the (social) context in which the algorithm is applied, even if the associated fairness metric is the same. Finally, we sketch paths for future work towards a more complete evaluation of fair-ml algorithms, beyond their direct optimization objectives.
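To make the two optimization strategies named above concrete, the following is a minimal sketch (not the paper's algorithm) of how group-specific and randomized decision thresholds can be applied when post-processing a fixed risk score; the group labels, threshold values, and mixing probabilities are illustrative assumptions only.

```python
import numpy as np

# Hypothetical illustration: post-processing a risk score with
# group-specific, optionally randomized, decision thresholds.
rng = np.random.default_rng(0)

def decide(score, group, thresholds, mix_probs=None):
    """Return a binary decision for one individual.

    thresholds : dict mapping group -> (low_threshold, high_threshold)
    mix_probs  : dict mapping group -> probability of using the low threshold;
                 if None, only the high threshold is used (deterministic rule).
    A randomized rule mixes between two thresholds, which is one way to reach
    a target rate that no single deterministic cut-off can achieve exactly.
    """
    low, high = thresholds[group]
    if mix_probs is None:
        cutoff = high
    else:
        cutoff = low if rng.random() < mix_probs[group] else high
    return int(score >= cutoff)

# Example: different cut-offs per group, with randomization for group "a".
thresholds = {"a": (0.4, 0.6), "b": (0.5, 0.5)}
mix_probs = {"a": 0.3, "b": 0.0}

decisions = [decide(s, g, thresholds, mix_probs)
             for s, g in [(0.55, "a"), (0.55, "b"), (0.45, "a")]]
print(decisions)
```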