Very few eXplainable AI (XAI) studies consider how users' understanding of explanations might change depending on whether they know more or less about the to-be-explained domain (i.e., whether they differ in their expertise). Yet, expertise is a critical facet of most high-stakes human decision making (e.g., understanding how a trainee doctor differs from an experienced consultant). Accordingly, this paper reports a novel user study (N=96) on how people's expertise in a domain affects their understanding of post-hoc explanations-by-example for a deep-learning, black-box classifier. The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions (e.g., response times, perceptions of correctness and helpfulness), when the image-based domain considered is familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada MNIST). The wider implications of these new findings for XAI strategies are discussed.