Explainable AI (XAI) methods provide explanations of AI models, but our understanding of how they compare with human explanations remains limited. In image classification, we found that humans adopted more explorative attention strategies for the explanation task than for the classification task itself. Two representative explanation strategies were identified through clustering: one involved focused visual scanning of foreground objects, paired with more conceptual explanations diagnostic for inferring class labels, whereas the other involved explorative scanning, paired with more visual explanations that were rated higher for effectiveness. Interestingly, XAI saliency-map explanations were most similar to the explorative attention strategy in humans, and explanations highlighting discriminative features by invoking observable causality through perturbation were more similar to human strategies than those highlighting internal features associated with a higher class score. Thus, humans differ in the information and strategies they use for explanation, and XAI methods that highlight features informing observable causality match human explanations better and may be more accessible to users.
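
The contrast drawn above between perturbation-based explanations (which probe observable causality) and explanations based on internal class scores can be made concrete with a minimal sketch. The following is a hypothetical occlusion-saliency example, not the paper's implementation; `model_fn`, `target_class`, and the patch/stride/baseline parameters are assumptions for illustration only.

```python
# Minimal sketch of a perturbation-based saliency map: occlude image patches
# and record the drop in the model's target-class score. Regions whose
# occlusion lowers the score most provide observable causal evidence.
import numpy as np

def occlusion_saliency(image, model_fn, target_class,
                       patch=16, stride=8, baseline=0.0):
    """image: (H, W, C) array; model_fn: hypothetical callable returning
    class probabilities for an image; returns an (H, W) saliency map."""
    h, w = image.shape[:2]
    base_score = model_fn(image)[target_class]
    saliency = np.zeros((h, w), dtype=np.float32)
    counts = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline  # mask the patch
            drop = base_score - model_fn(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return saliency / np.maximum(counts, 1)
```

By contrast, class-score-based methods (e.g., gradient-style attributions) read out which internal features most increase the predicted class score, without intervening on the input itself.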