Recent years have seen a surge of interest in explainable AI (XAI), with a plethora of algorithms proposed in the literature. However, a lack of consensus on how to evaluate XAI hinders the advancement of the field. We highlight that XAI is not a monolithic set of technologies -- researchers and practitioners have begun to leverage XAI algorithms to build XAI systems that serve different usage contexts, such as model debugging and decision support. Algorithmic research on XAI, however, often does not account for these diverse downstream usage contexts, resulting in limited effectiveness or even unintended consequences for actual users, as well as difficulties for practitioners in making technical choices. We argue that one way to close this gap is to develop evaluation methods that account for the different user requirements of these usage contexts. Toward this goal, we introduce a perspective of contextualized XAI evaluation that considers the relative importance of XAI evaluation criteria for prototypical usage contexts of XAI. To explore the context dependency of XAI evaluation criteria, we conduct two survey studies, one with XAI topical experts and another with crowd workers. Our results call for responsible AI research with usage-informed evaluation practices, and provide a nuanced understanding of user requirements for XAI in different usage contexts.