As a technical sub-field of artificial intelligence (AI), explainable AI (XAI) has produced a vast collection of algorithms, providing a toolbox for researchers and practitioners to build XAI applications. With these rich application opportunities, explainability has moved beyond a demand by data scientists or researchers to comprehend the models they develop, becoming an essential requirement for people to trust and adopt AI deployed in numerous domains. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are becoming increasingly important. In this chapter, we begin with a high-level overview of the technical landscape of XAI algorithms, then selectively survey our own and other recent HCI works that take human-centered approaches to design, evaluate, and provide conceptual and methodological tools for XAI. We ask the question "what are human-centered approaches doing for XAI?" and highlight three roles that they play in shaping XAI technologies by helping navigate, assess, and expand the XAI toolbox: to drive technical choices by users' explainability needs, to uncover pitfalls of existing XAI methods and inform new methods, and to provide conceptual frameworks for human-compatible XAI.