Explainable AI (XAI) refers to the techniques and methods for building AI applications that help end users interpret the outputs and predictions of AI models. Black-box AI applications in high-stakes decision-making settings, such as the medical domain, have increased the demand for transparency and explainability, since wrong predictions may have severe consequences. Model explainability and interpretability are vital for the successful deployment of AI models in healthcare practice. The underlying reasoning of AI applications needs to be transparent to clinicians in order to gain their trust. This paper presents a systematic review of XAI aspects and challenges in the healthcare domain. The primary goals of this study are to review various XAI methods, their challenges, and related machine learning models in healthcare. The methods are discussed under six categories: feature-oriented methods, global methods, concept models, surrogate models, local pixel-based methods, and human-centric methods. Most importantly, the paper explores the role of XAI in healthcare problems to clarify its necessity in safety-critical applications. The paper aims to establish a comprehensive understanding of XAI-related applications in healthcare by reviewing the related experimental results. To facilitate future research and help fill research gaps, the importance of XAI models from different viewpoints and their limitations are investigated.
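To make one of the reviewed categories concrete, the sketch below illustrates the general idea behind surrogate-model explanations: an interpretable model is fit to mimic a black-box model's predictions so that its decision logic can be inspected. This is a minimal illustrative example, not code from the reviewed paper; the use of scikit-learn, a random forest as the black box, a depth-limited decision tree as the surrogate, and the breast-cancer dataset are all assumptions made here for demonstration.

```python
# Illustrative sketch of a surrogate-model explanation (assumed example, not from the paper).
# A shallow decision tree is fit to reproduce a black-box random forest's predictions,
# giving a human-readable approximation of how the black box behaves.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Black-box model whose behaviour we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
black_box_preds = black_box.predict(X)

# Interpretable surrogate trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, black_box_preds)

# Fidelity: how closely the surrogate reproduces the black-box predictions.
fidelity = accuracy_score(black_box_preds, surrogate.predict(X))
print(f"Surrogate fidelity to black-box predictions: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

A clinician-facing report would typically show the surrogate's rules (or, for local pixel-based methods, a saliency map over an image) alongside the fidelity score, so that users know how faithfully the explanation reflects the underlying model.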