Although machine learning (ML) models achieve high performance in medicine, they are not free of errors. Empowering clinicians to identify incorrect model recommendations is crucial for engendering trust in medical AI. Explainable AI (XAI) aims to address this requirement by clarifying AI reasoning to support end users. Several recent studies on biomedical imaging have achieved promising results. Nevertheless, solutions for models trained on tabular data do not yet meet clinicians' requirements. This paper proposes a methodology to support clinicians in identifying failures of ML models trained with tabular data. We built our methodology on three main pillars: decomposing the feature set by leveraging the clinical context latent space, assessing the clinical association of global explanations, and generating local explanations based on Latent Space Similarity (LSS). We demonstrated our methodology on ML-based recognition of preterm infant morbidities caused by infection. The risk of mortality, lifelong disability, and antibiotic resistance due to model failures has been an open research question in this domain. With our approach, we identified misclassification cases of two models. By contextualizing local explanations, our solution provides clinicians with actionable insights that support their autonomy in making informed final decisions.
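To illustrate how LSS-based local explanations might be realized in practice, the sketch below retrieves the latent-space neighbors of a queried case and checks whether their known outcomes agree with the model's prediction; the `encoder`, `model`, and reference arrays are hypothetical placeholders under our own assumptions, not the paper's actual implementation.

```python
# Hedged sketch (not the authors' implementation): one possible form of
# Latent-Space-Similarity (LSS) based local explanation for tabular data.
# Assumes a hypothetical `encoder` mapping tabular features to a latent space
# and reference cases with known outcomes in `X_ref`, `y_ref`.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lss_local_explanation(encoder, model, X_ref, y_ref, x_query, k=5):
    """Retrieve the k most similar reference cases in latent space and
    compare their outcomes with the model's prediction for x_query."""
    z_ref = encoder(X_ref)                      # latent vectors of reference cases
    z_query = encoder(x_query.reshape(1, -1))   # latent vector of the queried case

    nn = NearestNeighbors(n_neighbors=k, metric="cosine").fit(z_ref)
    _, idx = nn.kneighbors(z_query)
    neighbor_labels = y_ref[idx[0]]             # outcomes of the most similar cases

    prediction = model.predict(x_query.reshape(1, -1))[0]
    agreement = float(np.mean(neighbor_labels == prediction))

    # Low agreement between the prediction and similar past cases can flag
    # a potential misclassification for clinician review.
    return {"prediction": prediction,
            "neighbor_labels": neighbor_labels.tolist(),
            "agreement": agreement}
```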