As a technical sub-field of artificial intelligence (AI), explainable AI (XAI) has produced a vast collection of algorithms, providing a toolbox for researchers and practitioners to build XAI applications. With these rich application opportunities, explainability has moved beyond a demand by data scientists or researchers to comprehend the models they are developing, and has become an essential requirement for people to trust and adopt AI deployed in numerous domains. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are therefore becoming increasingly important. In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI work that takes human-centered approaches to design and evaluate XAI, and to provide conceptual and methodological tools for it. We ask the question ``\textit{what are human-centered approaches doing for XAI}'' and highlight three roles they play in shaping XAI technologies by helping navigate, assess, and expand the XAI toolbox: driving technical choices by users' explainability needs, uncovering pitfalls of existing XAI methods and informing new methods, and providing conceptual frameworks for human-compatible XAI.