Deep learning methods have attracted growing attention across a wide range of applications owing to their outstanding performance. To examine whether this high performance stems from the proper use of data artifacts and an accurate formulation of the given task, interpretation models have become a crucial component in developing deep learning-based systems. Interpretation models enable the understanding of the inner workings of deep learning models and provide a means of detecting the misuse of artifacts in the input data. Like prediction models, however, interpretation models are susceptible to adversarial inputs. This work introduces two attacks, AdvEdge and AdvEdge$^{+}$, that deceive both the target deep learning model and its coupled interpretation model. We assess the effectiveness of the proposed attacks against two deep learning model architectures coupled with four interpretation models representing different categories of interpreters. Our experiments implement the attacks using various attack frameworks, and we explore potential countermeasures against such attacks. Our analysis demonstrates the effectiveness of the attacks in deceiving deep learning models and their interpreters, and offers insights into both strengthening the attacks and defending against them.
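At a high level, attacks of this kind can be cast as a dual-objective optimization: the perturbed input must simultaneously mislead the classifier and leave the interpretation map essentially unchanged. As a minimal sketch (the notation below is illustrative, not the paper's exact formulation), with classifier $f$, interpreter $g$, benign input $x$, adversarial target class $y_t$, and benign attribution map $m = g(x; f)$:
$$\min_{x'} \ \ell_{\mathrm{prd}}\big(f(x'),\, y_t\big) \;+\; \lambda\, \ell_{\mathrm{int}}\big(g(x'; f),\, m\big) \quad \text{s.t.} \quad \|x' - x\|_{\infty} \le \epsilon,$$
where $\ell_{\mathrm{prd}}$ drives misclassification, $\ell_{\mathrm{int}}$ penalizes deviation of the adversarial interpretation from the benign one, $\lambda$ balances the two objectives, and $\epsilon$ bounds the perturbation magnitude.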