Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused: they start and end with an algorithm that implements a basic, untested idea about explainability. These systems are often not evaluated to determine whether the algorithm helps users accomplish any goals, and so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and to implement algorithms in service of that purpose. In this paper, we review some of the basic concepts that have been used in user-centered XAI systems over the past 40 years of research. Based on these, we describe the "Self-Explanation Scorecard", which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically grounded, user-centered design principles that may guide developers in creating successful explainable systems.