Artificial intelligence (AI) models are increasingly finding applications in the field of medicine, and concerns have been raised about the explainability of the decisions these models make. In this article, we present a systematic analysis of explainable artificial intelligence (XAI), with a primary focus on models currently used in healthcare. The literature search was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards, covering relevant work published from 1 January 2012 to 2 February 2022. The review analyzes the prevailing trends in XAI and lays out the major directions in which the research is headed. We investigate why, how, and when these XAI models are used, and what their implications are. We present a comprehensive examination of XAI methodologies and explain how trustworthy AI can be derived from explaining AI models in healthcare settings. The discussion in this work will contribute to the formalization of the XAI field.