Methods for model explainability have become increasingly critical for testing the fairness and soundness of deep learning. A number of explainability techniques have been developed which use a set of examples to represent a human-interpretable concept in a model's activations. In this work we show that these explainability methods can suffer the same vulnerability to adversarial attacks as the models they are meant to analyze. We demonstrate this phenomenon on two well-known concept-based approaches to the explainability of deep learning models: TCAV and faceted feature visualization. We show that by carefully perturbing the examples of the concept being investigated, we can radically change the output of the interpretability method, e.g., making it appear that stripes are not an important factor in identifying images of zebras. Our work highlights the fact that in safety-critical applications, there is a need for security around not only the machine learning pipeline but also the model interpretation process.
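To make the attack surface concrete, the following is a minimal sketch, not the paper's actual method, of the standard TCAV pipeline on a toy linear model: a concept activation vector (CAV) is fit to separate activations of concept examples from random examples, and the TCAV score is the fraction of class inputs whose logit increases along that vector. Every name below (the toy weights, the perturbation strength, the helper functions) is an illustrative assumption; the only point it demonstrates is that the method's output depends entirely on the attacker-controllable concept example set.

```python
# Minimal TCAV-style sketch on a toy linear model (illustrative only; not the
# paper's attack). All identifiers here are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_in, d_act = 16, 8

# Toy network: activations h(x) = W x, class logit f(x) = v . h(x).
W = rng.normal(size=(d_act, d_in))
v = rng.normal(size=d_act)

def activations(X):
    return X @ W.T

def tcav_score(concept_examples, random_examples, class_examples):
    """Fraction of class examples whose logit increases along the CAV."""
    # 1. Fit a linear classifier separating concept vs. random activations;
    #    the CAV is the (normalized) normal of its decision boundary.
    A = np.vstack([activations(concept_examples), activations(random_examples)])
    y = np.concatenate([np.ones(len(concept_examples)),
                        np.zeros(len(random_examples))])
    clf = LogisticRegression(max_iter=1000).fit(A, y)
    cav = clf.coef_.ravel() / np.linalg.norm(clf.coef_)
    # 2. Directional derivative of the logit along the CAV. For this linear
    #    model, d f / d h is the constant vector v for every input.
    grad_logit = np.tile(v, (len(class_examples), 1))
    return np.mean(grad_logit @ cav > 0)

# Clean concept examples whose activations align with +v, so the concept
# genuinely matters for the class; random examples are isotropic noise.
concept_dir = np.linalg.pinv(W) @ v
concept_X = concept_dir + 0.1 * rng.normal(size=(50, d_in))
random_X = rng.normal(size=(200, d_in))
class_X = rng.normal(size=(100, d_in))

print("clean TCAV score:   ", tcav_score(concept_X, random_X, class_X))

# The attack surface: small changes to the *concept examples* only. Here we
# crudely push them against the concept direction (the paper instead optimizes
# the perturbation), which flips the learned CAV and collapses the reported
# concept importance even though the model itself is untouched.
perturbed_X = concept_X - 2.2 * concept_dir
print("attacked TCAV score:", tcav_score(perturbed_X, random_X, class_X))
```

On this toy model the clean score is 1.0 and the attacked score drops to 0.0, the analogue of the zebra-stripes example above: the model is unchanged, yet the interpretability output reverses because only the concept example set was perturbed.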