Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods. In light of these results, it remains unclear for practitioners how to use these methods and choose between them in a principled way. In this paper, we show that for moderately rich model classes (easily satisfied by neural networks), any feature attribution method that is complete and linear -- for example, Integrated Gradients and SHAP -- can provably fail to improve on random guessing for inferring model behaviour. Our results apply to common end-tasks such as characterizing local model behaviour, identifying spurious features, and algorithmic recourse. One takeaway from our work is the importance of concretely defining end-tasks: once such an end-task is defined, a simple and direct approach of repeated model evaluations can outperform many other complex feature attribution methods.
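To make the contrast concrete, below is a minimal, illustrative sketch (not from the paper): a complete-and-linear attribution, here Integrated Gradients approximated with a Riemann sum of finite-difference gradients, next to the "repeated model evaluations" baseline the abstract alludes to, which directly probes whether perturbing a feature changes the model's output. The toy model `f`, the probe deltas, and the specific inputs are assumptions chosen only for illustration.

```python
import numpy as np

def f(x):
    # Toy "model" for illustration: ignores feature 0 (spurious), depends on feature 1.
    return np.sin(3.0 * x[..., 1])

def integrated_gradients(f, x, baseline, steps=64, eps=1e-4):
    """Approximate Integrated Gradients with a Riemann sum of finite-difference gradients."""
    alphas = (np.arange(steps) + 0.5) / steps
    grads = np.zeros_like(x)
    for a in alphas:
        point = baseline + a * (x - baseline)
        for j in range(x.shape[-1]):
            e = np.zeros_like(x)
            e[j] = eps
            # Central finite difference for the partial derivative w.r.t. feature j.
            grads[j] += (f(point + e) - f(point - e)) / (2 * eps)
    # Completeness: these attributions sum (approximately) to f(x) - f(baseline).
    return (x - baseline) * grads / steps

def direct_probe(f, x, j, deltas=np.linspace(-1.0, 1.0, 9)):
    """Repeated model evaluations: does the output move when feature j moves?"""
    outs = []
    for d in deltas:
        x_pert = x.copy()
        x_pert[j] += d
        outs.append(f(x_pert))
    # Output range over the probes; near zero suggests the feature is inert here.
    return np.ptp(outs)

x = np.array([0.7, 0.2])
baseline = np.zeros_like(x)
print("IG attributions:", integrated_gradients(f, x, baseline))
print("probe range, feature 0:", direct_probe(f, x, 0))  # ~0: spurious feature
print("probe range, feature 1:", direct_probe(f, x, 1))  # > 0: relevant feature
```

The point of the sketch is only to show what "a simple and direct approach of repeated model evaluations" can look like once the end-task (here, checking whether a single feature is spurious at a given input) is pinned down; it is not the paper's construction or proof.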