We propose a BlackBox Counterfactual Explainer, designed to explain image classification models for medical applications. Classical approaches (e.g., saliency maps) that assess feature importance do not explain "how" imaging features in important anatomical regions are relevant to the classification decision. Our framework explains the decision for a target class by gradually "exaggerating" the semantic effect of that class in a query image. We adopted a Generative Adversarial Network (GAN) to generate a progressive set of perturbations to a query image, such that the classification decision changes from its original class to its negation. We used counterfactual explanations from our framework to audit a classifier trained on a multi-label chest x-ray dataset. We proposed clinically relevant quantitative metrics, such as the cardiothoracic ratio and the score of a healthy costophrenic recess, to evaluate our explanations. We conducted a human-grounded experiment with diagnostic radiology residents to compare different styles of explanations (no explanation, saliency map, CycleGAN explanation, and our counterfactual explanation) along five aspects: (1) understandability, (2) justification of the classifier's decision, (3) visual quality, (4) identity preservation, and (5) overall helpfulness of an explanation to the users. Our results show that our counterfactual explanation was the only explanation method that significantly improved the users' understanding of the classifier's decision compared to the no-explanation baseline. Our metrics established a benchmark for evaluating model explanation methods on medical images. Our explanations revealed that the classifier relied on clinically relevant radiographic features for its diagnostic decisions, thus making its decision-making process more transparent to the end-user.
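The abstract cites the cardiothoracic ratio (CTR) as one clinically grounded evaluation metric. As a minimal illustrative sketch (not the paper's implementation; the function name and binary-mask inputs are assumptions), the CTR can be approximated from heart and thorax segmentation masks as the widest horizontal cardiac extent divided by the widest internal thoracic extent:

```python
import numpy as np

def cardiothoracic_ratio(heart_mask: np.ndarray, thorax_mask: np.ndarray) -> float:
    """Approximate CTR from binary (H, W) segmentation masks:
    widest horizontal extent of the heart over the widest
    internal horizontal extent of the thorax."""
    def max_width(mask: np.ndarray) -> int:
        cols = np.any(mask, axis=0)            # columns the structure occupies
        idx = np.flatnonzero(cols)
        return 0 if idx.size == 0 else int(idx[-1] - idx[0] + 1)
    return max_width(heart_mask) / max_width(thorax_mask)

# toy example: heart spans 10 columns, thorax spans 20
heart = np.zeros((32, 32), dtype=bool)
heart[10:20, 11:21] = True
thorax = np.zeros((32, 32), dtype=bool)
thorax[5:28, 6:26] = True
print(round(cardiothoracic_ratio(heart, thorax), 2))  # 0.5
```

A CTR above roughly 0.5 on a PA chest radiograph is a conventional threshold suggesting cardiomegaly, which is why tracking this ratio across the progressive counterfactual perturbations gives a clinically interpretable signal.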