The remarkable success of deep learning has prompted interest in its application to medical diagnosis. Even though state-of-the-art deep learning models have achieved human-level accuracy in classifying different types of medical data, these models are rarely adopted in clinical workflows, mainly because of their lack of interpretability. The black-box nature of deep learning models has raised the need for strategies that explain the decision process of these models, leading to the creation of the field of eXplainable Artificial Intelligence (XAI). In this context, we provide a thorough survey of XAI applied to medical diagnosis, covering visual, textual, and example-based explanation methods. Moreover, this work reviews the existing medical imaging datasets and the existing metrics for evaluating the quality of explanations. Complementary to most existing surveys, we include a performance comparison among a set of report-generation-based methods. Finally, the major challenges of applying XAI to medical imaging are also discussed.