Increasing concerns about the disparate effects of AI have motivated a great deal of work on fair machine learning. Existing work mainly focuses on independence- and separation-based measures (e.g., demographic parity, equality of opportunity, equalized odds), while sufficiency-based measures such as predictive parity are much less studied. This paper considers predictive parity, which requires equalizing the probability of success given a positive prediction across protected groups. We prove that, if the overall performance levels of the groups differ only moderately, all fair Bayes-optimal classifiers under predictive parity are group-wise thresholding rules. Perhaps surprisingly, this may fail to hold when group performance levels differ widely; in that case, we find that predictive parity among protected groups can induce within-group unfairness. We then propose FairBayes-DPP, an adaptive thresholding algorithm designed to ensure predictive parity when our condition is satisfied, while also seeking to maximize test accuracy. We provide supporting experiments on synthetic and empirical data.
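For concreteness, the predictive parity criterion described above can be written formally as follows (a minimal sketch of the standard sufficiency-based formulation; the notation $Y$ for the true label, $\hat{Y}$ for the prediction, $X$ for the features, and $A$ for the protected attribute is our assumption, not fixed by this abstract). A classifier satisfies predictive parity if
$$
\mathbb{P}\bigl(Y = 1 \mid \hat{Y} = 1,\, A = a\bigr) \;=\; \mathbb{P}\bigl(Y = 1 \mid \hat{Y} = 1,\, A = b\bigr) \qquad \text{for all protected groups } a, b,
$$
i.e., the probability of success given a positive prediction is equal across groups. A group-wise thresholding rule, as referenced above, takes the form $\hat{Y} = \mathbf{1}\{\eta_a(x) \ge t_a\}$, where $\eta_a(x) = \mathbb{P}(Y = 1 \mid X = x,\, A = a)$ and $t_a$ is a group-dependent threshold.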