More recently, Explainable Artificial Intelligence (XAI) research has shifted toward a more pragmatic or naturalistic account of understanding, that is, whether stakeholders understand the explanations. This point is especially important for research on evaluation methods for XAI systems. Thus, another direction in which XAI research can benefit significantly from cognitive science and psychology is in ways to measure users' understanding, responses, and attitudes. These measures can be used to quantify explanation quality and to serve as feedback to the XAI system for improving its explanations. The current report aims to propose suitable metrics for evaluating XAI systems from the perspective of stakeholders' cognitive states and processes. We elaborate on seven dimensions, i.e., goodness, satisfaction, user understanding, curiosity & engagement, trust & reliance, controllability & interactivity, and learning curve & productivity, together with recommended subjective and objective psychological measures. We then provide more detail on how the recommended measures can be used to evaluate a visual classification XAI system according to these cognitive metrics.