A recent paper (Hedden 2021) argues that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. The argument proceeds by discussing a special case of a fair prediction. In our paper, we show that Hedden's argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these human-group-based practices. We argue that there is a morally salient distinction between human-group-based practices and practices based on data of only one person, which we call human-individual-based practices. Consequently, what is a necessary condition for the fairness of human-group-based practices need not be a necessary condition for the fairness of human-individual-based practices, on which Hedden's argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.