Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning based methods, in practice. One main reason is the black-box nature of these approaches and the inherent lack of insight into the automatically derived decisions. To increase trust in these methods, this paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas that most influence the algorithm's decision. Moreover, this research presents a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques to deep learning models and generating visual interpretations and explanations that clinicians can use to corroborate their clinical findings and to gain confidence in such methods. The framework builds on existing interpretability and explainability techniques that currently focus on classification models, extending them to segmentation tasks. In addition, these methods have been adapted to 3D models for volumetric analysis. The proposed framework provides methods to quantitatively compare visual explanations using infidelity and sensitivity metrics. Data scientists can use this framework to perform post-hoc interpretation and explanation of their models, develop more explainable tools, and present the findings to clinicians to increase their trust in such models. The proposed framework was evaluated on a use case of vessel segmentation models trained on Time-of-Flight (TOF) Magnetic Resonance Angiogram (MRA) images of the human brain. Quantitative and qualitative results of a comparative study of different models and interpretability methods are presented. Furthermore, this paper provides an extensive overview of several existing interpretability and explainability methods.
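As a minimal sketch of the core idea described above (hypothetical names throughout; this is not the TorchEsegeta API): a classification-oriented attribution method such as Integrated Gradients can be applied to a segmentation network by wrapping the network so that its voxel-wise output is reduced to a scalar score per sample, and the resulting attribution map can then be scored with Captum's infidelity and sensitivity metrics.

```python
# Illustrative sketch only, assuming PyTorch and Captum. The segmentation
# network, wrapper, and parameter values are hypothetical stand-ins.
import torch
from captum.attr import IntegratedGradients
from captum.metrics import infidelity, sensitivity_max


class SegmentationScalarWrapper(torch.nn.Module):
    """Wraps a segmentation network so that attribution methods written for
    classifiers (which expect a scalar or per-class score) become applicable:
    the output is the target class logit summed over the whole volume."""

    def __init__(self, seg_model, target_class=1):
        super().__init__()
        self.seg_model = seg_model
        self.target_class = target_class

    def forward(self, x):
        logits = self.seg_model(x)  # (N, C, D, H, W) for a 3D model
        return logits[:, self.target_class].sum(dim=(1, 2, 3))  # scalar per sample


def perturb_fn(inputs):
    # Small Gaussian perturbation; captum.metrics.infidelity expects the
    # perturbation and the perturbed inputs as a tuple.
    noise = torch.randn_like(inputs) * 0.01
    return noise, inputs - noise


# Stand-in 3D "segmenter" (2 output classes); a real model would be trained.
seg_model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)
wrapped = SegmentationScalarWrapper(seg_model)

volume = torch.randn(1, 1, 16, 16, 16)  # e.g. a TOF-MRA patch
ig = IntegratedGradients(wrapped)
attribution = ig.attribute(volume, n_steps=32)

# Quantitative comparison of the visual explanation, as in the framework:
infid = infidelity(wrapped, perturb_fn, volume, attribution)
sens = sensitivity_max(ig.attribute, volume)
print(f"infidelity={infid.item():.4g}, sensitivity={sens.item():.4g}")
```

Reducing the segmentation output to a scalar is one straightforward way to reuse classification attribution methods for segmentation; lower infidelity and lower sensitivity then give a quantitative basis for comparing explanations across models and attribution techniques.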