The ability to explain its decisions to end-users is a necessity for AI deployed as critical decision support. Yet making AI explainable to end-users remains a relatively neglected and challenging problem. To bridge this gap, we first identified twelve end-user-friendly explanatory forms that do not require technical knowledge to comprehend, including feature-, example-, and rule-based explanations. We then instantiated the explanatory forms as prototyping cards in four AI-assisted critical decision-making tasks, and conducted a user study to co-design low-fidelity prototypes with 32 layperson participants. The results verified the relevance of using the explanatory forms as building blocks of explanations, and identified their properties (pros, cons, applicable explainability needs, and design implications). The explanatory forms, their properties, and the prototyping support together constitute the End-User-Centered explainable AI framework EUCA. It serves as a practical prototyping toolkit for HCI/AI practitioners and researchers to build end-user-centered explainable AI. The EUCA framework is available at http://weina.me/end-user-xai