Despite their high accuracy, modern complex image classifiers cannot be trusted for sensitive tasks because of their opaque decision-making processes and potential biases. Counterfactual explanations are effective in providing transparency for these black-box algorithms. Nevertheless, generating counterfactuals that have a consistent impact on classifier outputs while exposing interpretable feature changes remains a very challenging task. We introduce a novel method to generate causal yet interpretable counterfactual explanations for image classifiers using pretrained generative models, without any re-training or conditioning. The generative models in this technique need not be trained on the same data as the target classifier. We use this framework to obtain contrastive and causal sufficiency and necessity scores as global explanations for black-box classifiers. On the task of face attribute classification, we show how different attributes influence the classifier output by providing both causal and contrastive feature attributions, together with the corresponding counterfactual images.
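As background for the sufficiency and necessity scores mentioned above, the following is a minimal sketch of the standard probabilities of causation in Pearl's counterfactual notation, assuming the paper's contrastive scores follow this general form (with $X$ a feature such as a face attribute and $Y$ the classifier output); the exact definitions used in this work may differ in detail.
% Standard probabilities of causation (Pearl); assumed general form of the
% necessity and sufficiency scores, with X a feature and Y the classifier output.
\begin{align*}
  \mathrm{PN} &= P\!\left(Y_{x'} = y' \mid X = x,\; Y = y\right)
  && \text{necessity: had } X \text{ been } x', \text{ would the output have changed?} \\
  \mathrm{PS} &= P\!\left(Y_{x} = y \mid X = x',\; Y = y'\right)
  && \text{sufficiency: had } X \text{ been } x, \text{ would the output have become } y?
\end{align*}
In this reading, a counterfactual image that flips the feature $X$ and changes the classifier's decision provides evidence toward necessity, while one that sets $X$ and induces the decision provides evidence toward sufficiency.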