Methods for model explainability have become increasingly critical for testing the fairness and soundness of deep learning. Concept-based interpretability techniques, which use a small set of human-interpretable concept exemplars to measure the influence of a concept on a model's internal representation of its input, are an important thread in this line of research. In this work we show that these explainability methods can suffer the same vulnerability to adversarial attacks as the models they are meant to analyze. We demonstrate this phenomenon on two well-known concept-based interpretability methods: TCAV and faceted feature visualization. We show that by carefully perturbing the examples of the concept under investigation, we can radically change the output of the interpretability method. The attacks that we propose can either induce positive interpretations (polka dots are an important concept for a model when classifying zebras) or negative interpretations (stripes are not an important factor in identifying images of a zebra). Our work highlights the fact that in safety-critical applications, there is a need for security not only around the machine learning pipeline but also around the model interpretation process.
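To make the attack surface concrete, the sketch below shows a standard TCAV-style score computation using synthetic activations and gradients as stand-ins (NumPy/scikit-learn; the arrays, sizes, and model here are illustrative assumptions, not the paper's experimental setup). The attacks described above target the concept exemplars whose activations enter this computation.

```python
# Minimal sketch of a TCAV-style score computation (illustrative only;
# the attack perturbs the concept exemplars that feed this pipeline).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-ins for layer activations of concept exemplars (e.g. "stripes")
# and of random counterexample images; in practice these come from a
# chosen layer of the model under analysis.
concept_acts = rng.normal(loc=1.0, size=(50, 128))
random_acts = rng.normal(loc=0.0, size=(50, 128))

# 1. Fit a linear classifier separating concept from random activations;
#    its normalized weight vector is the concept activation vector (CAV).
clf = LogisticRegression(max_iter=1000).fit(
    np.vstack([concept_acts, random_acts]),
    np.concatenate([np.ones(50), np.zeros(50)]),
)
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 2. TCAV score: fraction of class inputs (e.g. zebra images) whose
#    directional derivative of the class logit along the CAV is positive.
#    Here the gradients are synthetic stand-ins for d(logit)/d(activation).
class_grads = rng.normal(loc=0.5, size=(100, 128))
tcav_score = float(np.mean(class_grads @ cav > 0))
print(f"TCAV score: {tcav_score:.2f}")

# An adversary who imperceptibly perturbs the concept exemplar images
# shifts concept_acts, hence the CAV, and with it the reported score.
```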