A surge of interest in explainable AI (XAI) has produced a vast collection of algorithmic work on the topic. While many recognize the necessity of incorporating explainability features in AI systems, how to address real-world user needs for understanding AI remains an open question. By interviewing 20 UX and design practitioners working on various AI products, we seek to identify gaps between current XAI algorithmic work and the practices needed to create explainable AI products. To do so, we develop an algorithm-informed XAI question bank, in which user needs for explainability are represented as prototypical questions users might ask about the AI, and use it as a study probe. Our work contributes insights into the design space of XAI, informs efforts to support design practices in this space, and identifies opportunities for future XAI work. We also provide an extended XAI question bank and discuss how it can be used for creating user-centered XAI.