Recent years have seen a surge of interest in explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies -- researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision support. Algorithmic research on XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners in making technical choices. We argue that one way to close this gap is to develop evaluation methods that account for different user requirements in these usage contexts. Toward this goal, we introduce a perspective of contextualized XAI evaluation by considering the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results call for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.