Explainable artificial intelligence has been gaining attention in the past few years. However, most existing methods are based on gradients or intermediate features, which are not directly involved in the decision-making process of the classifier. In this paper, we propose a slot attention-based classifier called SCOUTER for transparent yet accurate classification. Two major differences from other attention-based methods are that (a) SCOUTER's explanation is involved in the final confidence for each category, offering a more intuitive interpretation, and (b) every category has a corresponding positive or negative explanation, which tells "why the image is of a certain category" or "why the image is not of a certain category." We design a new loss tailored for SCOUTER that controls the model's behavior to switch between positive and negative explanations, as well as the size of the explanatory regions. Experimental results show that SCOUTER gives better visual explanations in terms of various metrics while keeping good accuracy on small and medium-sized datasets.
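To make the architecture concrete, below is a minimal PyTorch sketch of a SCOUTER-style head under stated assumptions: one learned slot per class attends over backbone features, and the summed attention of each slot directly serves as that class's logit, so the explanation is involved in the final confidence by construction. The names (`SlotExplainer`, `scouter_loss`), the sigmoid attention, the GRU slot update, and the mean-attention area term are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotExplainer(nn.Module):
    """One slot per class; each slot's aggregated attention *is* the class logit."""
    def __init__(self, num_classes, dim, iters=3, positive=True):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_classes, dim))  # learned initial slots
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.gru = nn.GRUCell(dim, dim)  # iterative slot refinement
        self.iters = iters
        # +1 gives positive explanations ("why this class"),
        # -1 gives negative ones ("why not this class") -- assumed switch.
        self.scale = 1.0 if positive else -1.0

    def forward(self, feats):
        # feats: (B, N, dim) backbone features flattened over spatial positions
        B = feats.shape[0]
        k = self.to_k(feats)
        slots = self.slots.unsqueeze(0).expand(B, -1, -1)          # (B, num_classes, dim)
        for _ in range(self.iters):
            q = self.to_q(slots)
            attn = torch.sigmoid(q @ k.transpose(1, 2))            # (B, num_classes, N)
            updates = attn @ feats                                 # attention-weighted features
            slots = self.gru(updates.flatten(0, 1),
                             slots.flatten(0, 1)).view_as(slots)
        attn = torch.sigmoid(self.to_q(slots) @ k.transpose(1, 2)) # final attention maps
        logits = self.scale * attn.sum(dim=-1)                     # (B, num_classes)
        return logits, attn                                        # attn doubles as the explanation

def scouter_loss(logits, attn, target, lam=1.0):
    # Cross-entropy plus an area term; the mean attention value is an assumed
    # stand-in for the paper's area loss that keeps explanatory regions small.
    return F.cross_entropy(logits, target) + lam * attn.mean()
```

In use, one would flatten a CNN backbone's output to `(B, H*W, dim)` before passing it to the head, e.g. `feats = backbone(images).flatten(2).transpose(1, 2)`, then train with `logits, attn = head(feats)` and `scouter_loss(logits, attn, labels, lam)`. The key design choice the abstract describes is visible here: since the logit is nothing but the summed attention, reading the attention map off `attn` explains exactly the evidence that produced the confidence, and `lam` trades accuracy against the size of the highlighted region.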