Despite the fundamental distinction between adversarial training (AT) and natural training (NT), AT methods generally adopt momentum SGD (MSGD) for the outer optimization. This paper examines this choice by investigating the overlooked role of outer optimization in AT. Our exploratory evaluations reveal that AT induces higher gradient norm and variance than NT. This phenomenon hinders the outer optimization in AT, since the convergence rate of MSGD depends heavily on the variance of the gradients. To address this, we propose an optimization method, called ENGM, that regularizes the contribution of each input example to the average mini-batch gradient. We prove that the convergence rate of ENGM is independent of the gradient variance, making it well suited to AT. We also introduce a trick that reduces the computational cost of ENGM, based on an empirically observed correlation between the norms of the gradients w.r.t. the network parameters and w.r.t. the input examples. Extensive evaluations and ablation studies on CIFAR-10, CIFAR-100, and TinyImageNet demonstrate that ENGM and its variants consistently improve the performance of a wide range of AT methods. Furthermore, ENGM alleviates major shortcomings of AT, including robust overfitting and high sensitivity to hyperparameter settings.
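To make the stated idea concrete, below is a minimal, hypothetical PyTorch sketch of regularizing each example's contribution to the mini-batch gradient via per-example gradient-norm capping. The function name, the `max_contrib` threshold, and the clipping rule are illustrative assumptions and are not the actual ENGM update, which the abstract does not spell out.

```python
# Hypothetical sketch, NOT the paper's ENGM algorithm: the abstract only states that
# each example's contribution to the averaged mini-batch gradient is regularized.
# The name `max_contrib` and the per-example capping rule below are assumptions
# made purely for illustration.
import torch

def regularized_minibatch_grads(model, loss_fn, x_batch, y_batch, max_contrib=1.0):
    """Average per-example gradients after capping each example's gradient norm."""
    params = [p for p in model.parameters() if p.requires_grad]
    avg_grads = [torch.zeros_like(p) for p in params]
    n = x_batch.size(0)
    for i in range(n):
        loss = loss_fn(model(x_batch[i:i + 1]), y_batch[i:i + 1])
        grads = torch.autograd.grad(loss, params)
        # Cap this example's gradient norm so that no single (e.g., hard adversarial)
        # input dominates the averaged mini-batch gradient.
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        scale = min(1.0, max_contrib / (norm + 1e-12))
        for a, g in zip(avg_grads, grads):
            a.add_(g, alpha=scale / n)
    return avg_grads
```

The resulting averaged gradients could then feed a standard MSGD step. Note that the per-example loop is the naive, batch-size-times-slower implementation; avoiding this overhead is precisely what the cost-reduction trick mentioned in the abstract targets.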