In intensive care units (ICUs), critically ill patients are monitored with electroencephalograms (EEGs) to prevent serious brain injury. The number of patients who can be monitored is constrained by the availability of trained physicians to read EEGs, and EEG interpretation can be subjective and prone to inter-observer variability. Automated deep learning systems for EEG could reduce human bias and accelerate the diagnostic process. However, black-box deep learning models are difficult to troubleshoot and lack accountability in real-world applications, which limits clinicians' trust in them and their adoption in practice. To address these challenges, we propose a novel interpretable deep learning model that not only predicts the presence of harmful brainwave patterns but also provides high-quality case-based explanations of its decisions. Despite being constrained to be interpretable, our model performs better than the corresponding black-box model. The learned 2D embedding space provides the first global overview of the structure of ictal-interictal-injury-continuum brainwave patterns. Understanding how our model arrives at its decisions will not only help clinicians diagnose and treat harmful brain activity more accurately but also increase their trust in, and adoption of, machine learning models in clinical practice; such a model could become an integral component of ICU neurologists' standard workflow.
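For readers unfamiliar with case-based interpretability, the sketch below illustrates one common way such a classification head can be constructed: a prototype layer in the style of ProtoPNet, where each prototype is a learned point in the embedding space that can be projected onto its nearest real training case, so every prediction is explainable as "this EEG resembles that EEG." This is a minimal illustration under stated assumptions, not the paper's exact architecture; the dimensions, class count, and similarity function are illustrative choices.

```python
import torch
import torch.nn as nn


class CaseBasedHead(nn.Module):
    """Minimal sketch of a case-based (prototype) classification head.

    Assumptions (not taken from the paper): a backbone encoder maps each
    EEG sample to a low-dimensional embedding (2D here, matching the kind
    of embedding space the abstract visualizes), 30 prototypes, 6 classes.
    """

    def __init__(self, embed_dim: int = 2, n_prototypes: int = 30, n_classes: int = 6):
        super().__init__()
        # Prototypes live in the same embedding space as the encoded samples;
        # after training they would be projected onto real training cases.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, embed_dim))
        # Linear layer maps prototype similarities to class scores.
        self.classifier = nn.Linear(n_prototypes, n_classes, bias=False)

    def forward(self, z: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # z: (batch, embed_dim) embeddings from the backbone encoder.
        # Squared Euclidean distance from each sample to each prototype.
        dists = torch.cdist(z, self.prototypes) ** 2        # (batch, n_prototypes)
        # Bounded log-ratio similarity: closer case -> higher activation.
        sims = torch.log((dists + 1.0) / (dists + 1e-4))    # (batch, n_prototypes)
        logits = self.classifier(sims)                      # (batch, n_classes)
        # sims identify which stored cases drove each decision,
        # which is what makes the prediction explainable case by case.
        return logits, sims


# Usage sketch: classify a batch of 2D embeddings.
head = CaseBasedHead()
logits, sims = head(torch.randn(8, 2))
```

The key design choice in this family of models is that the explanation is not post hoc: the similarity scores in `sims` are the very quantities the classifier combines, so the cited cases faithfully reflect the model's reasoning.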