Non-technical end-users are the silent and invisible users of state-of-the-art explainable artificial intelligence (XAI) technologies. Their requirements for AI explainability are not incorporated into the design and evaluation of XAI techniques, even though these techniques are developed to explain the rationale behind AI decisions to end-users and to support their critical decision-making. This gap can make XAI techniques ineffective or even harmful in high-stakes applications such as healthcare, criminal justice, finance, and autonomous driving systems. To systematically understand end-users' requirements and support the technical development of XAI, we conducted the EUCA user study with 32 layperson participants across four AI-assisted critical tasks. The study identified comprehensive user requirements for feature-, example-, and rule-based XAI techniques (manifested by the end-user-friendly explanation forms) and for XAI evaluation objectives (manifested by the explanation goals), which were shown to directly inspire new XAI algorithms and evaluation metrics. The EUCA study findings, the identified explanation forms and goals for technical specification, and the EUCA study dataset support the design and evaluation of end-user-centered XAI techniques for accessible, safe, and accountable AI.