Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term "negotiation." These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.