Deep neural networks (DNNs) are vulnerable to adversarial examples with small perturbations. Adversarial defense has thus become an important means of improving the robustness of DNNs against adversarial examples. Existing defense methods focus on specific types of adversarial examples and may fail to defend well in real-world applications. In practice, we may face many types of attacks, and the exact type of adversarial examples encountered in real-world applications may even be unknown. In this paper, motivated by the observation that adversarial examples are more likely to appear near the classification boundary, we study adversarial examples from a new perspective: whether we can defend against them by pulling them back to the original clean distribution. We theoretically and empirically verify the existence of defense affine transformations that restore adversarial examples. Relying on this, we learn a defense transformer to counterattack adversarial examples by parameterizing affine transformations and exploiting the boundary information of DNNs. Extensive experiments on both toy and real-world datasets demonstrate the effectiveness and generalization of our defense transformer.
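To make the core idea concrete, below is a minimal, hypothetical sketch (not the authors' implementation) of a module that learns per-image affine transformation parameters and applies them to an input before it reaches a fixed classifier, illustrating how a "defense transformer" could pull an adversarial example back toward the clean distribution. All names (DefenseTransformer, the localization network layout) are assumptions for illustration only.

```python
# Hypothetical sketch of a learned affine defense transformation (PyTorch).
# A small localization network regresses 6 affine parameters per image,
# which are applied to the input via grid sampling before classification.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DefenseTransformer(nn.Module):
    def __init__(self, in_channels: int = 3):
        super().__init__()
        # Localization network: predicts a 2x3 affine matrix per image.
        self.loc = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, 6),
        )
        # Initialize to the identity transform so the module starts as a no-op.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1.0, 0.0, 0.0, 0.0, 1.0, 0.0]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        theta = self.loc(x).view(-1, 2, 3)  # per-image affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)

# Usage: transform a (possibly adversarial) batch before a frozen classifier.
# defender = DefenseTransformer()
# logits = classifier(defender(x_adv))
```

In practice, such a module would be trained so that transformed adversarial inputs are classified correctly (e.g., using the classifier's loss and boundary information as described in the paper), while the classifier itself stays fixed.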