Algorithmic fairness is typically studied from the perspective of predictions. Instead, here we investigate fairness from the perspective of recourse actions suggested to individuals to remedy an unfavourable classification. We propose two new fairness criteria at the group and individual level which, unlike prior work on equalising the average group-wise distance from the decision boundary, explicitly account for causal relationships between features, thereby capturing downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction. We study theoretically and empirically how to enforce fair causal recourse by altering the classifier, and perform a case study on the Adult dataset. Finally, we discuss whether fairness violations in the data-generating process revealed by our criteria may be better addressed by societal interventions than by constraints on the classifier.
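To make the group-level idea concrete, the following is a minimal, hypothetical sketch: a toy structural causal model in which the protected attribute shifts one feature, which in turn causally affects another, plus a fixed linear classifier. It compares the average minimal cost of causal recourse (interventions whose downstream effects are propagated through the SCM) across the two protected groups. The SCM, the classifier weights, and the cost model are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Toy structural causal model: protected attribute A shifts X1,
# and X1 causally affects the downstream feature X2.
A = rng.integers(0, 2, size=n)              # protected attribute (group 0 / 1)
X1 = 1.0 * A + rng.normal(0.0, 1.0, n)      # feature influenced by A
X2 = 0.5 * X1 + rng.normal(0.0, 1.0, n)     # feature downstream of X1

# Fixed linear classifier h(x) = sign(w . x + b); it does not use A directly.
w = np.array([1.0, 1.0])
b = -1.5
scores = w[0] * X1 + w[1] * X2 + b
negative = scores < 0                        # individuals needing recourse

# Causal recourse: an intervention do(X1 := x1 + d) also moves X2 by
# 0.5 * d downstream, so the score changes by (w[0] + 0.5 * w[1]) * d.
# The minimal |d| that flips the decision is the margin deficit divided
# by that effective slope.
slope = w[0] + 0.5 * w[1]
costs = np.maximum(0.0, -scores) / slope

# Group-level fairness of recourse (in spirit): compare average recourse
# cost among the negatively classified individuals of each group.
group_means = {g: costs[negative & (A == g)].mean() for g in (0, 1)}
for g, m in group_means.items():
    print(f"group {g}: mean causal recourse cost = {m:.3f}")
```

Because A raises X1 (and, through it, X2), group 0 sits further from the boundary on average, so its mean recourse cost comes out higher in this toy setup; a purely distance-based criterion that ignored the X1 → X2 edge would overstate both groups' costs by the same factor and could mask or exaggerate such gaps.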