Advances in AI technologies have led to substantial improvements in model performance. However, they have also increased model complexity, resulting in 'black box' models. In response to the AI black box problem, the field of explainable AI (xAI) has emerged with the aim of providing explanations catered to human understanding, trust, and transparency. Yet, we still have a limited understanding of how xAI addresses the need for explainability in the context of healthcare. Our research explores the differing explanation needs amongst stakeholders during the development of an AI system for classifying COVID-19 patients for the ICU. We demonstrate that there is a constellation of stakeholders with different explanation needs, not just the 'user'. Further, the findings demonstrate how the need for xAI emerges through concerns associated with specific stakeholder groups, i.e., the development team, subject matter experts, decision makers, and the audience. Our findings contribute to the expansion of xAI by highlighting that different stakeholders have different explanation needs. From a practical perspective, the study provides insights into how AI systems can be adjusted to support different stakeholders' needs, ensuring better implementation and operation in a healthcare context.