With the wide use of deep neural networks (DNNs), model interpretability has become a critical concern, since explainable decisions are preferred in high-stakes scenarios. Current interpretation techniques mainly focus on the feature-attribution perspective, which is limited in indicating why and how particular explanations relate to the prediction. To this end, an intriguing class of explanations, named counterfactuals, has been developed to further explore the "what-if" circumstances for interpretation and to enable reasoning about black-box models. However, generating counterfactuals for raw data instances (i.e., text and images) is still at an early stage, due to the challenges posed by high data dimensionality and the lack of semantics in raw features. In this paper, we design a framework to generate counterfactuals specifically for raw data instances through the proposed Attribute-Informed Perturbation (AIP). By utilizing generative models conditioned on different attributes, counterfactuals with desired labels can be obtained effectively and efficiently. Instead of directly modifying instances in the data space, we iteratively optimize over the constructed attribute-informed latent space, where features are more robust and semantic. Experimental results on real-world text and image data demonstrate the effectiveness, sample quality, and efficiency of the designed framework, and show its superiority over alternative approaches. In addition, we introduce several practical applications built on our framework, indicating its potential beyond model interpretability.
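To make the core idea concrete, the following is a minimal sketch (not the authors' exact AIP procedure) of how counterfactual search in an attribute-informed latent space can be carried out: a conditional generator and the target classifier are kept frozen, while the latent code and attribute vector are iteratively optimized so that the decoded instance receives the desired label yet stays close to the original encoding. All module shapes and names below are illustrative placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, ATTR_DIM, DATA_DIM, NUM_CLASSES = 16, 4, 64, 2

# Placeholder conditional generator G(z, a) and classifier f, standing in for pretrained models.
generator = nn.Sequential(nn.Linear(LATENT_DIM + ATTR_DIM, 128), nn.ReLU(),
                          nn.Linear(128, DATA_DIM))
classifier = nn.Sequential(nn.Linear(DATA_DIM, 64), nn.ReLU(),
                           nn.Linear(64, NUM_CLASSES))
for p in list(generator.parameters()) + list(classifier.parameters()):
    p.requires_grad_(False)  # both models stay fixed during the counterfactual search

def counterfactual_search(z0, a0, target_label, steps=200, lr=0.05, dist_weight=0.1):
    """Optimize (z, a) so that classifier(generator([z, a])) predicts target_label."""
    z = z0.clone().requires_grad_(True)
    a = a0.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([z, a], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        optimizer.zero_grad()
        x_cf = generator(torch.cat([z, a], dim=-1))            # decode candidate counterfactual
        loss_cls = F.cross_entropy(classifier(x_cf), target)   # push prediction toward the desired label
        loss_dist = dist_weight * ((z - z0) ** 2).sum()        # stay close to the original latent encoding
        (loss_cls + loss_dist).backward()
        optimizer.step()
    return generator(torch.cat([z, a], dim=-1)).detach()

# Example usage: start from a random encoding and request label 1.
z_init, a_init = torch.randn(1, LATENT_DIM), torch.zeros(1, ATTR_DIM)
x_counterfactual = counterfactual_search(z_init, a_init, target_label=1)
```

Optimizing in the latent/attribute space rather than the raw data space is what keeps the resulting edits semantic; the actual loss terms and update scheme used by AIP are detailed in the paper body.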