Counterfactual explanations are emerging as an attractive option for providing recourse to individuals adversely impacted by algorithmic decisions. As they are deployed in critical applications (e.g., law enforcement, financial lending), it becomes important to ensure that we clearly understand the vulnerabilities of these methods and find ways to address them. However, these vulnerabilities and shortcomings remain poorly understood. In this work, we introduce the first framework that describes the vulnerabilities of counterfactual explanations and shows how they can be manipulated. More specifically, we show that counterfactual explanation techniques may converge to drastically different counterfactuals under a small input perturbation, indicating that they are not robust. Leveraging this insight, we introduce a novel training objective that produces seemingly fair models for which counterfactual explanations find much lower-cost recourse under a slight perturbation. We describe how these models can unfairly provide low-cost recourse to specific subgroups in the data while appearing fair to auditors. We perform experiments on loan and violent crime prediction data sets, where certain subgroups achieve up to 20x lower-cost recourse under the perturbation. These results raise concerns regarding the dependability of current counterfactual explanation techniques, which we hope will inspire investigations into robust counterfactual explanations.
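To make the non-robustness claim concrete, the following is a minimal sketch (not the authors' implementation) of Wachter-style gradient-based counterfactual search: starting the search from an input x versus a slightly perturbed x + delta can converge to counterfactuals with very different recourse costs. The toy model, data point, and hyperparameters here are illustrative assumptions.

```python
import torch

torch.manual_seed(0)

# Toy binary classifier standing in for, e.g., a lending model.
model = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.ReLU(),
    torch.nn.Linear(8, 1), torch.nn.Sigmoid(),
)

def find_counterfactual(x, target=1.0, lam=1.0, steps=500, lr=0.05):
    """Search for x_cf near x whose prediction is close to target."""
    x_cf = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_cf], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        validity = ((model(x_cf) - target) ** 2).sum()  # push prediction to target
        cost = torch.norm(x_cf - x, p=1)                # keep recourse cheap
        (lam * validity + cost).backward()
        opt.step()
    return x_cf.detach()

x = torch.tensor([[0.2, -0.4]])     # a rejected instance
delta = 0.01 * torch.randn_like(x)  # small perturbation of the input

cf_plain = find_counterfactual(x)
cf_perturbed = find_counterfactual(x + delta)

# On a manipulated (adversarially trained) model, these two recourse costs
# can differ drastically, even though delta is tiny.
print("recourse cost from x:        ", torch.norm(cf_plain - x, p=1).item())
print("recourse cost from x + delta:", torch.norm(cf_perturbed - x, p=1).item())
```

A manipulated model in the sense of this work would be trained so that the second search finds far cheaper recourse than the first for a chosen subgroup, while the model's predictions and unperturbed explanations look fair to an auditor.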