While many recent works have studied the problem of algorithmic fairness from the perspective of predictions, here we investigate the fairness of recourse actions recommended to individuals to recover from an unfavourable classification. To this end, we propose two new fairness criteria at the group and individual level which, unlike prior work on equalising the average distance from the decision boundary across protected groups, are based on a causal framework that explicitly models relationships between input features, thereby allowing us to capture downstream effects of recourse actions performed in the physical world. We explore how our criteria relate to others, such as counterfactual fairness, and show that fairness of recourse is complementary to fairness of prediction. We then investigate how to enforce fair recourse in the training of the classifier. Finally, we discuss whether fairness violations in the data-generating process revealed by our criteria may be better addressed by societal interventions and structural changes to the system, rather than by constraints on the classifier.