We propose a black-box \emph{Counterfactual Explainer} developed specifically for medical imaging applications. Classical approaches (e.g. saliency maps) that assess feature importance do not explain \emph{how} and \emph{why} variations in a particular anatomical region are relevant to the outcome, which is crucial for transparent decision making in healthcare applications. Our framework explains the outcome by gradually \emph{exaggerating} the semantic effect of the given outcome label. Given a query input to a classifier, a Generative Adversarial Network produces a progressive set of perturbations to the query image that gradually change the posterior probability from its original class to its negation. We design the loss function to ensure that essential and potentially relevant details, such as support devices, are preserved in the counterfactually generated images. We provide an extensive evaluation of different classification tasks on chest X-ray images. Our experiments show that the counterfactually generated visual explanations are consistent with clinically relevant measurements of the disease, both quantitatively and qualitatively.
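To make the progressive-exaggeration idea concrete, the following is a minimal sketch (not the authors' implementation) of how a conditional generator could be swept over a target posterior shift to produce a sequence of counterfactual images for a query. The classifier \texttt{ToyClassifier}, generator \texttt{ToyConditionalGenerator}, and the conditioning scheme are illustrative placeholders.

\begin{verbatim}
# Hedged sketch: sweep a condition "delta" that tells a conditional
# generator how far to shift the black-box classifier's posterior f(x),
# yielding a progression of counterfactual images. All modules here are
# placeholders, not the paper's architecture.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in for the black-box classifier f(x) -> P(disease | x)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 1))
    def forward(self, x):
        return torch.sigmoid(self.net(x))

class ToyConditionalGenerator(nn.Module):
    """Stand-in for a cGAN generator G(x, delta) that perturbs the query
    image so the classifier's posterior moves by roughly delta."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 3, padding=1))
    def forward(self, x, delta):
        cond = torch.full_like(x, float(delta))          # condition as extra channel
        return x + self.net(torch.cat([x, cond], dim=1)) # residual perturbation

f = ToyClassifier().eval()
G = ToyConditionalGenerator().eval()

x = torch.rand(1, 1, 64, 64)   # placeholder query chest X-ray
p0 = f(x).item()               # original posterior probability

# Gradually exaggerate the outcome: walk the posterior from p0 toward its negation.
target_shift = (1.0 - p0) if p0 < 0.5 else -p0
with torch.no_grad():
    for delta in torch.linspace(0.0, target_shift, steps=5):
        x_cf = G(x, delta)     # counterfactual image at this step of the progression
        print(f"target shift {delta:+.2f} -> posterior {f(x_cf).item():.3f}")
\end{verbatim}

In this sketch the generator is conditioned by concatenating the desired posterior shift as a constant channel; the paper's actual conditioning, loss terms (e.g. preserving support devices), and GAN training are not shown.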