Modern deep learning models are notoriously opaque, which has motivated the development of methods for interpreting how deep models make predictions. This goal is usually approached with attribution methods, which assess the influence of input features on model predictions. As explanation methods, attribution methods are evaluated by how accurately they reflect the actual reasoning process of the model (faithfulness). However, since the reasoning process of deep models is inaccessible, researchers have designed various evaluation methods to support their arguments. Crucial logic traps in these evaluation methods are ignored in most works, leading to inaccurate evaluation and unfair comparison. This paper systematically reviews existing methods for evaluating attribution scores and summarizes the logic traps they contain. We further conduct experiments to demonstrate the existence of each logic trap. Through both theoretical and experimental analysis, we hope to draw attention to the inaccurate evaluation of attribution scores. Moreover, we suggest shifting effort away from improving performance under unreliable evaluation systems and toward reducing the impact of the identified logic traps.