Counterfactual examples are an appealing class of post-hoc explanations for machine learning models. Given an input $x$ of class $y_1$, its counterfactual is a contrastive example $x^\prime$ of another class $y_0$. Current approaches primarily solve this task through a complex optimization: define an objective function based on the loss with respect to the counterfactual outcome $y_0$ under hard or soft constraints, then optimize this function as a black box. This "deep learning" approach, however, is rather slow, sometimes tricky, and may result in unrealistic counterfactual examples. In this work, we propose a novel approach that addresses these problems using only two gradient computations based on tractable probabilistic models. First, we compute an unconstrained counterfactual $u$ of $x$ to induce the counterfactual outcome $y_0$. Then, we adapt $u$ to higher density regions, resulting in $x^{\prime}$. Empirical evidence demonstrates the clear advantages of our approach.
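The two-step procedure sketched above can be illustrated with a minimal, hedged example: one gradient step on the classifier loss toward the counterfactual class yields the unconstrained counterfactual $u$, and one gradient step on the log-density of a tractable probabilistic model moves $u$ toward higher-density regions. The names `classifier`, `density_model`, and the step sizes are hypothetical placeholders, not the paper's actual interface.

```python
# Minimal sketch of the two-gradient-step idea (hypothetical interface, not the paper's code).
# Assumes `classifier(x)` returns class logits and `density_model.log_prob(x)` returns a
# differentiable log-density; step sizes are purely illustrative.
import torch
import torch.nn.functional as F

def counterfactual(x, y_target, classifier, density_model, step_cls=1.0, step_dens=1.0):
    # Step 1: unconstrained counterfactual u -- one gradient step on the
    # classification loss toward the counterfactual class y_target.
    x = x.detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y_target)
    grad_cls, = torch.autograd.grad(loss, x)
    u = x - step_cls * grad_cls

    # Step 2: adapt u toward higher-density regions -- one gradient step on
    # the log-density of the tractable probabilistic model.
    u = u.detach().requires_grad_(True)
    log_p = density_model.log_prob(u).sum()
    grad_dens, = torch.autograd.grad(log_p, u)
    x_prime = u + step_dens * grad_dens
    return x_prime.detach()
```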