Real-world applications of machine learning in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact tradeoff between fairness and accuracy is not entirely clear, even for the basic paradigm of classification. In this paper, we characterize an inherent tradeoff between statistical parity and accuracy in the classification setting by providing a lower bound on the sum of group-wise errors of any fair classifier. Our impossibility theorem can be interpreted as a certain uncertainty principle in fairness: if the base rates differ between groups, then any classifier satisfying statistical parity has to incur a large error on at least one of the groups. We further extend this result to give a lower bound on the joint error of any (approximately) fair classifier, from the perspective of learning fair representations. To show that our lower bound is tight, we also construct an algorithm that, assuming oracle access to the (potentially unfair) Bayes classifiers, returns a randomized classifier that is both optimal (in terms of accuracy) and fair. Interestingly, when the protected attribute can take more than two values, an extension of this lower bound does not admit an analytic solution. Nevertheless, in this case we show that the lower bound can be efficiently computed by solving a linear program, which we term the TV-Barycenter problem: a barycenter problem under the TV distance. On the positive side, we prove that if the group-wise Bayes optimal classifiers are close, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which requires the error rates to be close between groups. Finally, we conduct experiments on real-world datasets to corroborate our theoretical findings.
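For concreteness, the binary-group form of the lower bound can be restated as follows (a paraphrase in illustrative notation, not the verbatim theorem). Writing \(\alpha_a = \Pr(Y = 1 \mid A = a)\) for the base rate of group \(a \in \{0, 1\}\) and \(\mathrm{Err}_a(\widehat{Y}) = \Pr(\widehat{Y} \neq Y \mid A = a)\) for the group-wise error of a classifier \(\widehat{Y}\),
\[
\widehat{Y} \perp A \quad\Longrightarrow\quad \mathrm{Err}_0(\widehat{Y}) + \mathrm{Err}_1(\widehat{Y}) \;\geq\; |\alpha_0 - \alpha_1|,
\]
so whenever the base rates differ, at least one group must suffer error at least \(|\alpha_0 - \alpha_1| / 2\).

For the multi-group case, the TV-Barycenter problem mentioned above is a linear program over distributions on a finite support. The following is a minimal sketch of such an LP using scipy, under the assumption of a weighted TV-barycenter objective over distributions with m atoms; the function and variable names are illustrative, not from the paper:

```python
# Sketch of a TV-barycenter LP: given group-wise distributions P_a over a
# finite support, find the distribution Q minimizing sum_a w_a * TV(P_a, Q),
# where TV(P, Q) = (1/2) * sum_j |P_j - Q_j|.
import numpy as np
from scipy.optimize import linprog

def tv_barycenter(P, w):
    """P: (k, m) array of k distributions over m atoms; w: (k,) weights."""
    k, m = P.shape
    n = m + k * m                      # variables: Q (m) and slacks T (k*m)
    c = np.concatenate([np.zeros(m), 0.5 * np.repeat(w, m)])

    # Inequalities encoding T_{a,j} >= |P_{a,j} - Q_j|:
    #   Q_j - T_{a,j} <= P_{a,j}   and   -Q_j - T_{a,j} <= -P_{a,j}
    A_ub, b_ub = [], []
    for a in range(k):
        for j in range(m):
            row = np.zeros(n); row[j] = 1.0; row[m + a * m + j] = -1.0
            A_ub.append(row); b_ub.append(P[a, j])
            row = np.zeros(n); row[j] = -1.0; row[m + a * m + j] = -1.0
            A_ub.append(row); b_ub.append(-P[a, j])

    # Q must be a probability distribution: sum_j Q_j = 1, Q_j >= 0.
    A_eq = np.zeros((1, n)); A_eq[0, :m] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * n)
    return res.x[:m], res.fun          # barycenter Q and objective value

# Example: two groups with different base rates over a binary outcome.
P = np.array([[0.8, 0.2], [0.4, 0.6]])
Q, obj = tv_barycenter(P, w=np.array([0.5, 0.5]))
```

At the optimum the slack variables T tighten to the absolute deviations, so the LP value equals the weighted sum of TV distances; this is the standard trick for turning an L1-type objective into a linear program.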