This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, a significant set of participants experienced negative effects associated with confirmation bias, accentuated model over-reliance, and increased effort to interact with the model. Moreover, contradicting one of their main intended functions, standard explanatory models showed limited ability to support a critical understanding of the model's limitations. However, we found significant new positive effects that reposition the role of explanations within the clinical context: these include reduction of automation bias, support for ambiguous clinical cases (cases where HCPs were not certain about their decision), and support for less experienced HCPs in acquiring new domain knowledge.