IMPORTANCE: An interpretable machine learning model can provide faithful explanations of each prediction while maintaining higher performance than its black box counterpart.

OBJECTIVE: To design an interpretable machine learning model that accurately predicts EEG protopatterns while providing an explanation of its predictions with the assistance of a specialized GUI; and to map the continuous EEG (cEEG) latent features to a 2D space in order to visualize the ictal-interictal-injury continuum and gain insight into its high-dimensional structure.

DESIGN, SETTING, AND PARTICIPANTS: 50,697 50-second cEEG samples from 2,711 ICU patients, collected between July 2006 and March 2020 at Massachusetts General Hospital. Each sample was labeled as one of 6 EEG activities by domain experts, with 124 different experts providing annotations.

MAIN OUTCOMES AND MEASURES: Our neural network is interpretable because it uses case-based reasoning: it compares a new EEG reading to a set of learned prototypical EEG samples from the training dataset. Interpretability was measured with task-specific neighborhood agreement statistics. Discriminatory performance was evaluated with the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC).

RESULTS: The model achieves AUROCs of 0.87, 0.93, 0.96, 0.92, 0.93, and 0.80 for the classes Seizure, LPD, GPD, LRDA, GRDA, and Other, respectively. This performance is statistically significantly higher than that of the corresponding uninterpretable (black box) model (p < 0.0001). Videos of the ictal-interictal-injury continuum are provided.

CONCLUSION AND RELEVANCE: Our interpretable model and GUI can act as a reference for practitioners who work with cEEG patterns. We can now better understand the relationships between different types of cEEG patterns. In the future, this system may allow for targeted intervention and training in clinical settings. It could also be used to confirm diagnoses or to provide additional diagnostic information.
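To illustrate the case-based reasoning idea described above, the following is a minimal sketch, not the authors' implementation: a new EEG segment is embedded into a latent space and scored against learned prototype embeddings, with each class scored by its best-matching prototype. All names, shapes, and the encoder here (W, PROTOTYPES, embed_eeg, the channel and prototype counts) are hypothetical stand-ins for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 6        # Seizure, LPD, GPD, LRDA, GRDA, Other
N_CHANNELS = 64      # illustrative channel count
EMBED_DIM = 16       # illustrative latent dimension
N_PROTOTYPES = 12    # illustrative number of learned prototypes

# Stand-ins for a trained encoder and its learned prototypes:
# random projection weights, random prototype embeddings, random class labels.
W = rng.normal(size=(N_CHANNELS, EMBED_DIM))
PROTOTYPES = rng.normal(size=(N_PROTOTYPES, EMBED_DIM))
PROTOTYPE_CLASS = rng.integers(0, N_CLASSES, size=N_PROTOTYPES)

def embed_eeg(segment):
    """Placeholder encoder: project each time step and average over time."""
    return (segment @ W).mean(axis=0)

def prototype_scores(segment):
    """Score each class by similarity of the segment to that class's prototypes."""
    z = embed_eeg(segment)
    # Similarity = negative squared Euclidean distance to each prototype.
    sims = -np.sum((PROTOTYPES - z) ** 2, axis=1)
    scores = np.full(N_CLASSES, -np.inf)
    for c in range(N_CLASSES):
        mask = PROTOTYPE_CLASS == c
        if mask.any():
            scores[c] = sims[mask].max()   # best-matching prototype for class c
    return scores

# Example: classify one fake "50-second" segment (random numbers, not real cEEG).
segment = rng.normal(size=(200, N_CHANNELS))   # (time steps, channels)
print("predicted class index:", int(np.argmax(prototype_scores(segment))))
```

Because every class score traces back to a specific prototype drawn from the training set, the most similar prototypes can be shown to the user as the explanation for a prediction.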
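The per-class AUROC and AUPRC figures reported above correspond to one-vs-rest evaluation of a 6-class problem. A hedged sketch of that computation with scikit-learn follows; the labels and scores here are random placeholders, not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(1)
CLASSES = ["Seizure", "LPD", "GPD", "LRDA", "GRDA", "Other"]

# Fake ground-truth labels and model scores, standing in for expert annotations
# and the model's per-class output probabilities.
y_true = rng.integers(0, len(CLASSES), size=500)
y_score = rng.random(size=(500, len(CLASSES)))
y_score /= y_score.sum(axis=1, keepdims=True)   # rows sum to 1, like probabilities

for i, name in enumerate(CLASSES):
    binary = (y_true == i).astype(int)           # one-vs-rest ground truth
    auroc = roc_auc_score(binary, y_score[:, i])
    auprc = average_precision_score(binary, y_score[:, i])
    print(f"{name}: AUROC={auroc:.2f}, AUPRC={auprc:.2f}")
```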