Despite the myriad peer-reviewed papers demonstrating novel Artificial Intelligence (AI)-based solutions to COVID-19 challenges during the pandemic, few have made a significant clinical impact, in large part because of a lack of model transparency. This systematic review examines the use of Explainable Artificial Intelligence (XAI) during the pandemic and how it could help overcome barriers to real-world success. We find that the successful use of XAI can improve model performance, instill trust in end-users, and provide the value needed to affect user decision-making. We introduce the reader to common XAI techniques, their utility, and specific examples of their application. Evaluation of XAI results is also discussed as an important step in maximizing the value of AI-based clinical decision support systems. We illustrate the classical, modern, and potential future trends of XAI to elucidate the evolution of novel XAI techniques. Finally, we provide a checklist of suggestions for the experimental design process, supported by recent publications, and address common challenges in implementing AI solutions, with specific examples of potential remedies. We hope this review will serve as a guide to improve the clinical impact of future AI-based solutions.