Adversarial training (AT) has become a widely recognized defense mechanism to improve the robustness of deep neural networks against adversarial attacks. It solves a min-max optimization problem, where the minimizer (i.e., defender) seeks a robust model to minimize the worst-case training loss in the presence of adversarial examples crafted by the maximizer (i.e., attacker). However, the min-max nature makes AT computationally intensive and thus difficult to scale. Meanwhile, the FAST-AT algorithm, and in fact many recent algorithms that improve AT, simplify min-max-based AT by replacing its maximization step with a simple one-shot gradient-sign-based attack generation step. Although easy to implement, FAST-AT lacks theoretical guarantees, and its practical performance can be unsatisfactory, suffering from robustness catastrophic overfitting when training with strong adversaries. In this paper, we propose to design FAST-AT from the perspective of bi-level optimization (BLO). We first make the key observation that the most commonly used algorithmic specification of FAST-AT is equivalent to using some gradient descent-type algorithm to solve a bi-level problem involving a sign operation. However, the discrete nature of the sign operation makes it difficult to understand the algorithm's performance. Based on the above observation, we propose a new tractable bi-level optimization problem, and design and analyze a new set of algorithms termed Fast Bi-level AT (FAST-BAT). FAST-BAT is capable of defending against sign-based projected gradient descent (PGD) attacks without calling any gradient sign method or explicit robust regularization. Furthermore, we empirically show that our method outperforms state-of-the-art FAST-AT baselines, achieving superior model robustness without inducing robustness catastrophic overfitting or suffering any loss of standard accuracy.
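To make the one-shot gradient-sign attack step concrete, the following is a minimal NumPy sketch of the inner maximization used by FAST-AT-style methods: a single FGSM step, delta = epsilon * sign(grad_x loss). The linear-logistic model and all names here (fgsm_perturb, bce_loss, w, b, eps) are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def bce_loss(x, y, w, b):
    """Binary cross-entropy loss of a toy linear-logistic model (illustration only)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid of the logit
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, y, w, b, eps):
    """One-shot sign-based attack step: move the input by eps in the
    direction of sign(grad_x loss), the inner step FAST-AT uses in
    place of the full (multi-step) inner maximization."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w                    # gradient of BCE w.r.t. the input x
    return x + eps * np.sign(grad_x)        # single signed gradient step

# Toy usage: the one-shot attack should not decrease the training loss.
x = np.array([1.0, -2.0])
w = np.array([0.5, 1.5])
b, y, eps = 0.0, 1.0, 0.1
x_adv = fgsm_perturb(x, y, w, b, eps)
print(bce_loss(x, y, w, b), bce_loss(x_adv, y, w, b))
```

For this linear model the single sign step is the exact maximizer of the loss over the L-infinity ball of radius eps; for deep networks it is only a crude approximation, which is one source of the catastrophic-overfitting behavior discussed above.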