Adversarial training (AT) has become the de facto standard for obtaining models robust against adversarial examples. However, AT exhibits severe robust overfitting: cross-entropy loss on adversarial examples, the so-called robust loss, decreases continuously on training examples while eventually increasing on test examples. In practice, this leads to poor robust generalization, i.e., adversarial robustness does not generalize well to new examples. In this paper, we study the relationship between robust generalization and flatness of the robust loss landscape in weight space, i.e., whether robust loss changes significantly when perturbing weights. To this end, we propose average- and worst-case metrics to measure flatness in the robust loss landscape and show a correlation between good robust generalization and flatness. For example, throughout training, flatness reduces significantly during overfitting, such that early stopping effectively finds flatter minima in the robust loss landscape. Similarly, AT variants achieving higher adversarial robustness also correspond to flatter minima. This holds for many popular choices, e.g., AT-AWP, TRADES, MART, AT with self-supervision or additional unlabeled examples, as well as simple regularization techniques, e.g., AutoAugment, weight decay, or label noise. For fair comparison across these approaches, our flatness measures are specifically designed to be scale-invariant, and we conduct extensive experiments to validate our findings.
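As a rough illustration of the average-case flatness idea described above, the sketch below estimates how much robust loss increases under random weight perturbations whose magnitude is scaled relative to each parameter's norm, one plausible route to the scale-invariance mentioned in the abstract. This is a minimal sketch under stated assumptions, not the paper's exact formulation: the `robust_loss` callable (e.g., cross-entropy on adversarial examples produced by an attack of the reader's choice), the relative radius `xi`, and the per-parameter normalization are all illustrative assumptions.

```python
# Minimal sketch: average-case flatness as the expected increase in robust
# loss under random weight perturbations. Assumes a PyTorch model and a
# user-supplied robust_loss(model) -> float (hypothetical helper that runs
# an adversarial attack and returns cross-entropy on adversarial examples).
import copy
import torch


def average_case_flatness(model, robust_loss, n_samples=10, xi=0.5):
    """Estimate average-case flatness of the robust loss landscape.

    Returns the mean increase in robust loss when each parameter is
    perturbed by random noise of relative magnitude `xi`.
    """
    base_loss = robust_loss(model)
    increases = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            for param in perturbed.parameters():
                # Scale the noise by the parameter's own norm so the measure
                # does not change under layer-wise weight rescaling (an
                # assumption about how scale-invariance could be achieved).
                noise = torch.randn_like(param)
                noise = noise / (noise.norm() + 1e-12) * xi * param.norm()
                param.add_(noise)
        increases.append(robust_loss(perturbed) - base_loss)
    return sum(increases) / len(increases)
```

A worst-case variant would replace the random noise with perturbations found by gradient ascent on the robust loss within the same relative-norm ball; a small (flat) value of either measure indicates that robust loss changes little when the weights are perturbed.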