Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions, such as healthcare, loans, criminal detentions, and tax audits. Under the right circumstances, these algorithms can improve the efficiency and equity of decision-making. At the same time, there is a danger that the algorithms themselves could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines. To help ensure their fairness, many researchers suggest that algorithms be subject to at least one of three constraints: (1) no use of legally protected features, such as race, ethnicity, and gender; (2) equal rates of "positive" decisions across groups; and (3) equal error rates across groups. Here we show that these constraints, while intuitively appealing, often worsen outcomes for individuals in marginalized groups, and can even leave all groups worse off. The inherent trade-off we identify between formal fairness constraints and welfare improvements -- particularly for the marginalized -- highlights the need for a more robust discussion on what it means for an algorithm to be "fair". We illustrate these ideas with examples from healthcare and the criminal-legal system, and make several proposals to help practitioners design more equitable algorithms.
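To make constraints (2) and (3) concrete, the sketch below computes, for each group, the share of "positive" decisions and the false positive and false negative rates; constraint (2) asks that the positive-decision rates be (approximately) equal across groups, and constraint (3) asks the same of the error rates. This is an illustrative sketch only: the `fairness_audit` helper and the toy data are hypothetical, not code or data from the paper.

```python
import numpy as np

def fairness_audit(y_true, y_pred, group):
    """Per-group positive-decision rate, false positive rate, and false
    negative rate. Equal positive-decision rates correspond to
    constraint (2); equal error rates to constraint (3)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        positive_rate = yp.mean()
        # FPR: share of true negatives that receive a positive decision.
        fpr = yp[yt == 0].mean() if (yt == 0).any() else float("nan")
        # FNR: share of true positives that receive a negative decision.
        fnr = (1 - yp[yt == 1]).mean() if (yt == 1).any() else float("nan")
        report[g] = {"positive_rate": positive_rate, "fpr": fpr, "fnr": fnr}
    return report

# Hypothetical example: 1 = "positive" decision (e.g., receiving care).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fairness_audit(y_true, y_pred, group))
```

Large gaps between groups in these quantities indicate violations of constraints (2) or (3); the paper's argument is that enforcing equality of these quantities can nonetheless worsen outcomes, including for marginalized groups.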