Classical adversarial training (AT) frameworks are designed to achieve high adversarial accuracy against a single attack type, typically $\ell_\infty$ norm-bounded perturbations. Recent extensions of AT defend against the union of multiple perturbation types, but this benefit comes at the expense of a significant (up to $10\times$) increase in training complexity over single-attack $\ell_\infty$ AT. In this work, we expand the capabilities of widely popular single-attack $\ell_\infty$ AT frameworks to provide robustness to the union of ($\ell_\infty, \ell_2, \ell_1$) perturbations while preserving their training efficiency. Our technique, referred to as Shaped Noise Augmented Processing (SNAP), exploits a well-established byproduct of single-attack AT frameworks -- the reduction in the curvature of the decision boundary of networks. SNAP prepends a given deep net with a shaped noise augmentation layer whose distribution is learned along with the network parameters using any standard single-attack AT. As a result, SNAP enhances the adversarial accuracy of ResNet-18 on CIFAR-10 against the union of ($\ell_\infty, \ell_2, \ell_1$) perturbations by 14%-to-20% for four state-of-the-art (SOTA) single-attack $\ell_\infty$ AT frameworks, and, for the first time, establishes such a benchmark for ResNet-50 and ResNet-101 on ImageNet.
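To make the idea of a prepended, jointly trained noise-augmentation layer concrete, below is a minimal PyTorch-style sketch. It is an illustrative assumption, not the authors' exact method: the class name `ShapedNoiseLayer`, the Gaussian noise shape, the per-pixel `raw_scale` parameterization, and the `snap_wrap` helper are all hypothetical choices made only to show how a learnable noise distribution could be optimized alongside the backbone weights inside a standard single-attack AT loop.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShapedNoiseLayer(nn.Module):
    """Adds zero-mean noise with a learnable per-pixel scale to the input.

    The Gaussian shape used here is an illustrative assumption; SNAP's actual
    noise distribution and its learning rule are described in the paper.
    """
    def __init__(self, input_shape=(3, 32, 32), init_scale=0.1):
        super().__init__()
        # Unconstrained parameter; softplus below keeps the effective scale positive.
        self.raw_scale = nn.Parameter(torch.full(input_shape, init_scale))

    def forward(self, x):
        scale = F.softplus(self.raw_scale)      # positive noise standard deviation
        noise = torch.randn_like(x) * scale     # shaped, zero-mean noise sample
        return x + noise

def snap_wrap(backbone, input_shape=(3, 32, 32)):
    """Prepend the noise layer to any backbone (e.g., a ResNet-18 classifier)."""
    return nn.Sequential(ShapedNoiseLayer(input_shape), backbone)
```

Under this sketch, the wrapped model would simply be dropped into an existing single-attack $\ell_\infty$ AT training loop, so the noise-scale parameters receive gradients from the same adversarial loss that updates the network weights; no second attack type is generated during training.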