Algorithmic recourse explanations inform stakeholders how to act to revert unfavorable predictions. However, ML models generally do not predict well under interventional distributions. Thus, an action that changes the prediction in the desired way may not improve the underlying target. Such recourse is neither meaningful nor robust to model refits. Extending the work of Karimi et al. (2021), we propose meaningful algorithmic recourse (MAR), which only recommends actions that improve both prediction and target. We justify this selection constraint by highlighting the differences between model audit and meaningful, actionable recourse explanations. Additionally, we introduce a relaxation of MAR called effective algorithmic recourse (EAR), which, under certain assumptions, yields meaningful recourse by only allowing interventions on causes of the target.
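The failure mode described above can be illustrated with a minimal toy simulation. The sketch below assumes a hypothetical linear structural causal model in which x1 causes the target y and x2 is an *effect* of y; all variable names, coefficients, and the OLS model are illustrative assumptions, not the paper's actual setup. Intervening on x2 (a non-cause) shifts the model's prediction without changing the target, whereas intervening on x1 (a cause) improves both, as EAR would require.

```python
# Toy SCM: x1 -> y -> x2. An observational model learns to rely on x2,
# so "recourse" that intervenes on x2 changes the prediction but not y.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Structural equations (hypothetical coefficients).
x1 = rng.normal(size=n)
y = 2.0 * x1 + rng.normal(scale=0.1, size=n)      # target caused by x1 only
x2 = 1.5 * y + rng.normal(scale=0.1, size=n)      # x2 is an effect of y

# Ordinary least squares fit on the observational distribution.
X = np.column_stack([x1, x2, np.ones(n)])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(a1, a2):
    return w[0] * a1 + w[1] * a2 + w[2]

# An individual with an unfavorable prediction.
x1_0, y_0 = -1.0, -2.0
x2_0 = 1.5 * y_0
pred_0 = predict(x1_0, x2_0)

# Action A: intervene on x2, a non-cause of the target.
pred_after_a = predict(x1_0, x2_0 + 3.0)
target_after_a = 2.0 * x1_0        # y's structural equation ignores x2

# Action B: intervene on x1, a cause of the target.
x1_b = x1_0 + 1.5
y_b = 2.0 * x1_b                   # target responds to the intervention
pred_after_b = predict(x1_b, 1.5 * y_b)

print(pred_after_a > pred_0)       # prediction improved under action A
print(target_after_a == y_0 + 0)   # but the target did not improve
print(y_b > y_0 and pred_after_b > pred_0)  # action B improves both
```

Because the fitted model leans heavily on x2 (the strongest observational correlate of y), action A is "valid" recourse for the current model yet meaningless for the stakeholder, and it would not survive a refit on post-intervention data; action B is the kind of intervention EAR restricts to.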