In recent years, the field of explainable AI (XAI) has produced a vast collection of algorithms, providing a useful toolbox for researchers and practitioners to build XAI applications. With these rich application opportunities, explainability is believed to have moved beyond a demand by data scientists or researchers to comprehend the models they develop, to an essential requirement for people to trust and adopt AI deployed in numerous domains. However, explainability is an inherently human-centric property, and the field is only starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are becoming increasingly important. In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI. We ask the question "what are human-centered approaches doing for XAI" and highlight three roles they play in shaping XAI technologies by helping navigate, assess, and expand the XAI toolbox: to drive technical choices by users' explainability needs, to uncover pitfalls of existing XAI methods and inform new methods, and to provide conceptual frameworks for human-compatible XAI.