Adversarial training (AT) has been demonstrated to be effective in improving model robustness by leveraging adversarial examples for training. However, most AT methods face expensive time and computational costs, since generating adversarial examples requires calculating gradients over multiple steps. To boost training efficiency, fast AT methods adopt the fast gradient sign method (FGSM), which calculates the gradient only once. Unfortunately, the resulting robustness is far from satisfactory. One cause may be the initialization fashion. Existing fast AT generally uses a random, sample-agnostic initialization, which facilitates efficiency yet hinders further robustness improvement. Up to now, initialization in fast AT has not been extensively explored. In this paper, we boost fast AT with a sample-dependent adversarial initialization, i.e., the output of a generative network conditioned on a benign image and its gradient information from the target network. As the generative network and the target network are optimized jointly during training, the former can adaptively generate an effective initialization with respect to the latter, which leads to gradually improved robustness. Experimental evaluations on four benchmark databases demonstrate the superiority of our proposed method over state-of-the-art fast AT methods, as well as comparable robustness to advanced multi-step AT methods. The code is released at https://github.com/jiaxiaojunQAQ/FGSM-SDI.
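The pipeline described above can be sketched in PyTorch. This is a minimal, illustrative toy, not the paper's actual implementation: the generator architecture, the `InitGenerator` and `fgsm_with_learned_init` names, and all hyperparameters are assumptions. It only shows the data flow: a small generative network maps a benign image and its signed gradient to a bounded initialization, from which a single FGSM step is taken.

```python
import torch
import torch.nn as nn

class InitGenerator(nn.Module):
    """Toy generative network (hypothetical architecture): maps a benign image
    concatenated with its signed gradient to an initialization in [-eps, eps]."""
    def __init__(self, channels=3, eps=8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
            nn.Tanh(),  # bound raw output to [-1, 1], scaled to [-eps, eps] below
        )

    def forward(self, x, grad):
        return self.eps * self.net(torch.cat([x, grad], dim=1))

def fgsm_with_learned_init(model, generator, x, y, eps=8 / 255, alpha=10 / 255):
    """Single-step FGSM attack starting from a sample-dependent initialization."""
    loss_fn = nn.CrossEntropyLoss()
    # Gradient of the loss w.r.t. the benign input (one extra forward/backward).
    x = x.clone().detach().requires_grad_(True)
    loss_fn(model(x), y).backward()
    grad_sign = x.grad.sign()
    # Sample-dependent initialization conditioned on the image and its gradient.
    delta0 = generator(x.detach(), grad_sign)
    # One FGSM step from the learned initialization, projected back to the eps-ball.
    delta = delta0.detach().requires_grad_(True)
    loss_fn(model(x.detach() + delta), y).backward()
    delta = (delta0 + alpha * delta.grad.sign()).clamp(-eps, eps)
    x_adv = (x.detach() + delta).clamp(0, 1)
    return x_adv.detach()
```

In full training, the generator and target network would be optimized jointly (the generator adapting its initialization as the target network becomes more robust); the sketch above covers only the example-generation step.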