Deep Learning has become a valuable tool in many fields, and the learning capacity of these models is beyond doubt. Nevertheless, since Deep Learning models are often regarded as black boxes due to their lack of interpretability, there is widespread mistrust in their decision-making process. To strike a balance between effectiveness and interpretability, Explainable Artificial Intelligence (XAI) has gained popularity in recent years, and some of the methods in this area are used to generate counterfactual explanations. Generating such explanations typically requires solving an optimization problem for each input to be explained, which is infeasible when real-time feedback is needed. To speed up this process, some methods use autoencoders to generate counterfactual explanations instantly. Recently, a method called Deep Guided Counterfactual Explanations (DGCEx) has been proposed, which trains an autoencoder attached to a classification model in order to generate straightforward counterfactual explanations. However, this method does not ensure that the generated counterfactual instances lie close to the data manifold, so unrealistic counterfactuals may be produced. To overcome this issue, this paper presents Distribution Aware Deep Guided Counterfactual Explanations (DA-DGCEx), which adds a term to the DGCEx cost function that penalizes out-of-distribution counterfactual instances.
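A minimal sketch of the idea, under stated assumptions: the paper's exact formulation is not reproduced here, but a distribution-aware penalty of this kind is commonly implemented as the reconstruction error of an autoencoder trained on the data distribution. Writing $\tilde{x}$ for a generated counterfactual, $AE$ for that autoencoder, and $\lambda$ for an assumed weighting hyperparameter, the combined objective could take the form

$$\mathcal{L}_{\text{DA-DGCEx}} = \mathcal{L}_{\text{DGCEx}} + \lambda \, \lVert \tilde{x} - AE(\tilde{x}) \rVert_2^2,$$

so that counterfactuals drifting off the data manifold incur a large reconstruction penalty, while in-distribution counterfactuals are reconstructed almost exactly and are barely penalized.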