Explainable AI (XAI) holds the promise of advancing the implementation and adoption of AI-based tools in practice, especially in high-stakes environments like healthcare. However, most current research is disconnected from practical applications and lacks input from end users. To address this, we conducted semi-structured interviews with clinicians to discuss their thoughts, hopes, and concerns. We find that clinicians generally think positively about developing AI-based tools for clinical practice, but they have concerns about how these tools will fit into their workflow and how they will impact clinician-patient relations. We further identify education of clinicians on AI as a crucial factor for the success of AI in healthcare and highlight aspects clinicians are looking for in (X)AI-based tools. In contrast to other studies, we take a holistic and exploratory perspective to identify general requirements, which is necessary before moving on to testing specific (X)AI products for healthcare.