Transparency is a fundamental requirement for decision-making systems deployed in the real world. It is usually achieved by providing explanations of the system's behavior. A prominent and intuitive type of explanation is the counterfactual explanation: it explains a behavior to the user by proposing actions -- changes to the input -- that would cause a different (specified) behavior of the system. However, such explanation methods can be unstable with respect to small changes in the input -- i.e., even a small change in the input can lead to large or arbitrary changes in the output and in the explanation. This is problematic for counterfactual explanations, as two similar individuals might receive very different explanations. Even worse, if the recommended actions differ considerably in their complexity, such unstable (counterfactual) explanations must be considered individually unfair. In this work, we formally and empirically study the robustness of counterfactual explanations in general, as well as under different models and different kinds of perturbations. Furthermore, we propose using plausible counterfactual explanations instead of closest counterfactual explanations to improve robustness and, consequently, the individual fairness of counterfactual explanations.
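To make the notion of a "closest counterfactual explanation" concrete, the following is a minimal sketch (not the paper's method): for an assumed linear classifier, the closest input change that flips the decision can be computed in closed form by stepping just past the decision boundary along the weight vector.

```python
import numpy as np

# Hypothetical linear classifier f(x) = 1[w.x + b > 0]; weights chosen
# purely for illustration.
w = np.array([1.0, -2.0])
b = 0.5

def predict(x):
    return int(w @ x + b > 0)

def closest_counterfactual(x, eps=1e-3):
    # For a linear model, the closest point with a flipped label lies
    # along the normal direction w, at signed distance (w.x + b)/||w||.
    # Step a factor (1 + eps) so we land just past the boundary.
    margin = (w @ x + b) / (w @ w)
    return x - (1 + eps) * margin * w

x = np.array([2.0, 0.0])           # classified as 1
xcf = closest_counterfactual(x)    # minimal change flipping the decision
assert predict(x) != predict(xcf)
```

For nonlinear models there is generally no closed form, and the counterfactual is instead found by optimization; it is in that setting that small input perturbations can yield very different counterfactuals, which is the instability the abstract refers to.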