Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.