Artificial Intelligence (AI) has become a fundamental component of healthcare applications, both clinical and remote, but the best-performing AI systems are often too complex to be self-explanatory. Explainable AI (XAI) techniques are designed to unveil the reasoning behind a system's predictions and decisions, and they become even more critical when dealing with sensitive personal health data. Notably, XAI has not received the same attention across research areas and data types, especially in healthcare: many clinical and remote health applications are based on tabular and time series data, respectively, yet XAI is rarely analysed for these data types, as computer vision and Natural Language Processing (NLP) remain the reference applications. To provide an overview of the XAI methods most suitable for tabular and time series data in the healthcare domain, this paper reviews the literature of the last five years, illustrating the types of explanations generated and the efforts made to evaluate their relevance and quality. Specifically, we identify clinical validation, consistency assessment, objective and standardised quality evaluation, and human-centered quality assessment as key features for ensuring effective explanations for end users. Finally, we highlight the main research challenges in the field as well as the limitations of existing XAI methods.