Real-world applications of machine learning tools in high-stakes domains are often regulated to be fair, in the sense that the predicted target should satisfy some quantitative notion of parity with respect to a protected attribute. However, the exact tradeoff between fairness and accuracy is not entirely clear, even for the basic paradigm of classification problems. In this paper, we characterize an inherent tradeoff between statistical parity and accuracy in the classification setting by providing a lower bound on the sum of group-wise errors of any fair classifier. Our impossibility theorem can be interpreted as a certain uncertainty principle in fairness: if the base rates differ among groups, then any fair classifier satisfying statistical parity has to incur a large error on at least one of the groups. We further extend this result to give a lower bound on the joint error of any (approximately) fair classifier, from the perspective of learning fair representations. To show that our lower bound is tight, assuming oracle access to Bayes (potentially unfair) classifiers, we also construct an algorithm that returns a randomized classifier which is both optimal and fair. Interestingly, when the protected attribute can take more than two values, an extension of this lower bound does not admit an analytic solution. Nevertheless, in this case, we show that the lower bound can be efficiently computed by solving a linear program, which we term the TV-Barycenter problem, a barycenter problem under the TV-distance. On the upside, we prove that if the group-wise Bayes optimal classifiers are close, then learning fair representations leads to an alternative notion of fairness, known as accuracy parity, which states that the error rates are close between groups. Finally, we also conduct experiments on real-world datasets to confirm our theoretical findings.
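The TV-Barycenter problem mentioned above can be sketched concretely. The following is a minimal illustration, not the paper's implementation: given the group-wise base-rate distributions mu_1, ..., mu_K over m outcomes, it finds a single distribution nu minimizing the sum of total-variation distances to each mu_k, linearizing the absolute values with slack variables so the problem becomes a standard LP solvable by `scipy.optimize.linprog`. The function name `tv_barycenter` and this particular formulation are our own for illustration.

```python
# Sketch of a TV-barycenter linear program: minimize sum_k TV(nu, mu_k)
# over probability vectors nu, where TV(p, q) = 0.5 * sum_j |p_j - q_j|.
import numpy as np
from scipy.optimize import linprog

def tv_barycenter(mus):
    """mus: (K, m) array of probability vectors. Returns (nu, objective)."""
    mus = np.asarray(mus, dtype=float)
    K, m = mus.shape
    n = m + K * m  # variables: nu (m entries), then slacks s (K*m entries)
    # Objective: 0.5 * sum of slacks; nu itself has zero cost.
    c = np.concatenate([np.zeros(m), 0.5 * np.ones(K * m)])

    # Slack constraints s_{kj} >= |nu_j - mu_{kj}|, linearized as
    #   nu_j - s_{kj} <= mu_{kj}   and   -nu_j - s_{kj} <= -mu_{kj}
    A_ub, b_ub = [], []
    for k in range(K):
        for j in range(m):
            row = np.zeros(n); row[j] = 1.0; row[m + k * m + j] = -1.0
            A_ub.append(row); b_ub.append(mus[k, j])
            row = np.zeros(n); row[j] = -1.0; row[m + k * m + j] = -1.0
            A_ub.append(row); b_ub.append(-mus[k, j])

    # nu must be a probability vector: entries sum to one, all nonnegative.
    A_eq = np.zeros((1, n)); A_eq[0, :m] = 1.0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=[1.0], bounds=[(0, None)] * n)
    return res.x[:m], res.fun
```

As a sanity check, for two binary distributions (0.2, 0.8) and (0.8, 0.2), the optimum equals TV(mu_1, mu_2) = 0.6, consistent with the two-group case where the lower bound has a closed form.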