Many recent works have studied the problem of algorithmic fairness from the perspective of predictions. Here, we instead investigate the fairness of recourse actions recommended to individuals to recover from an unfavourable classification. We propose two new fairness criteria, at the group and individual level, which -- unlike prior work on equalising the average distance from the decision boundary across protected groups -- explicitly account for the causal relationships between input features, thereby allowing us to capture downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to existing notions, such as counterfactual fairness, and show that fairness of recourse (both causal and non-causal) is complementary to fairness of prediction. We then investigate how to enforce fair causal recourse in the training of a classifier. Finally, we discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions than by constraints on the classifier.
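To make the contrast with the distance-based baseline concrete, the following is a minimal sketch of what a group-level *causal* recourse-fairness check might look like. All names, coefficients, and the structural causal model (SCM) below are illustrative assumptions, not the paper's actual formulation: a toy two-variable additive-noise SCM with a protected attribute, a fixed linear classifier, and a comparison of the average minimal intervention cost needed to flip the decision across protected groups, where downstream effects of each action are propagated through the SCM.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy linear SCM (all structure and coefficients are assumptions) ---
# A : protected attribute (root);  X1 := 0.5*A + U1;  X2 := 1.5*X1 + U2
n = 2000
A = rng.integers(0, 2, size=n)            # protected group in {0, 1}
U1 = rng.normal(0.0, 1.0, size=n)
U2 = rng.normal(0.0, 1.0, size=n)
X1 = 0.5 * A + U1
X2 = 1.5 * X1 + U2

# Fixed linear classifier: favourable decision iff w . x + b >= 0
w, b = np.array([1.0, 1.0]), -1.0
score = w[0] * X1 + w[1] * X2 + b

def causal_recourse_cost(x1, x2, step=0.05, max_iters=200):
    """Smallest additive intervention theta on X1 that flips the decision.
    The downstream effect on X2 is propagated via abduction-action-prediction:
    under the additive-noise assumption, u2 = x2 - 1.5*x1 is recovered from
    the observation, and the counterfactual X2 is 1.5*(x1 + theta) + u2."""
    u2 = x2 - 1.5 * x1                    # abduction of the noise term
    for k in range(max_iters + 1):
        theta = k * step
        x1_cf = x1 + theta                # action on X1
        x2_cf = 1.5 * x1_cf + u2          # downstream effect on X2
        if w[0] * x1_cf + w[1] * x2_cf + b >= 0:
            return theta                  # cost = magnitude of intervention
    return np.nan                         # no recourse within the budget

# Group-level criterion (sketch): compare the average causal recourse cost
# across protected groups among the negatively classified individuals.
neg = score < 0
costs = np.array([causal_recourse_cost(x1, x2)
                  for x1, x2 in zip(X1[neg], X2[neg])])
groups = A[neg]
for g in (0, 1):
    print(f"group {g}: mean causal recourse cost = "
          f"{np.nanmean(costs[groups == g]):.3f}")
```

A purely distance-based criterion would instead compare average distances to the decision boundary in feature space; the sketch above differs precisely in that an action on X1 also moves X2 through the assumed causal mechanism, so the two criteria can disagree whenever the protected attribute influences the causal structure.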