As the lack of interpretability has been identified as a major obstacle to the adoption of Deep Neural Networks (DNNs), there is growing interest in addressing the transparency issue that accompanies their impressive performance. In this paper, we demonstrate the effectiveness of recent attribution techniques in explaining diagnostic decisions by visualizing the significant factors in the input image. By exploiting the objectness that DNNs have learned, fully decomposing the network prediction yields clear localization of the target lesion. To validate our approach, we conduct experiments on chest X-ray diagnosis with publicly available datasets. As an intuitive assessment metric for explanations, we report the Intersection over Union (IoU) between the visual explanation and the bounding box of the lesion. Experimental results show that recently proposed attribution methods produce more accurate localization of the diagnostic decision than the traditionally used CAM. Furthermore, we analyze the inconsistency between the intentions of humans and those of DNNs, which is easily obscured by high performance. By visualizing the relevant factors, it is possible to confirm whether the criterion for a decision is in line with the intended learning strategy. Our analysis, which unmasks machine intelligence, demonstrates the necessity of explainability in medical diagnostic decisions.
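The IoU evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the attribution map is normalized to [0, 1], that boxes are given as (x, y, w, h) pixel coordinates (as in the NIH ChestX-ray14 bounding-box annotations), and that a simple fixed threshold binarizes the explanation; the function name and threshold value are hypothetical choices.

```python
import numpy as np

def saliency_iou(saliency, bbox, threshold=0.5):
    """IoU between a thresholded attribution map and a lesion bounding box.

    saliency: 2-D array of attribution scores, normalized to [0, 1].
    bbox: (x, y, w, h) in pixel coordinates (hypothetical convention).
    """
    h_img, w_img = saliency.shape

    # Binarize the attribution map: pixels at or above the threshold
    # count as part of the visual explanation.
    pred = saliency >= threshold

    # Rasterize the ground-truth bounding box into a binary mask.
    x, y, w, h = bbox
    gt = np.zeros((h_img, w_img), dtype=bool)
    gt[int(y):int(y + h), int(x):int(x + w)] = True

    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0
```

In practice the threshold (or a top-percentile cutoff) is a design choice that strongly affects the reported IoU, so any comparison between attribution methods should hold it fixed across methods.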