Adversarial training is an effective but time-consuming way to train robust deep neural networks that can withstand strong adversarial attacks. In response to its inefficiency, we propose Dynamic Efficient Adversarial Training (DEAT), which gradually increases the number of adversarial iterations during training. We demonstrate that the gradient's magnitude correlates with the curvature of the trained model's loss landscape and therefore reflects the effect of adversarial training. Based on the gradient magnitude, we propose a general acceleration strategy, M+ acceleration, which adjusts the training procedure automatically and effectively. M+ acceleration is computationally efficient and easy to implement. It is well suited to DEAT and compatible with most existing adversarial training techniques. Extensive experiments have been conducted on the CIFAR-10 and ImageNet datasets under various training settings. The results show that the proposed M+ acceleration significantly improves the training efficiency of existing adversarial training methods while achieving comparable robustness. This demonstrates that the strategy is highly adaptive and offers a valuable solution for automatic adversarial training.
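To make the idea concrete, the sketch below shows a DEAT-style training loop in which the number of PGD steps starts at one and is increased when a gradient-magnitude signal crosses a threshold. The abstract does not specify the exact trigger rule, so the ratio-based condition, the threshold `m_plus_ratio`, and the helper names `pgd_attack`, `grad_magnitude`, and `deat_train` are hypothetical illustrations, not the paper's actual M+ criterion.

```python
# Minimal sketch of a DEAT-style loop with a hypothetical "M+" trigger.
# Assumptions: the ratio-based condition and all helper names are illustrative;
# the paper's actual criterion for increasing adversarial iterations may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=1):
    """Standard PGD attack with a configurable number of steps."""
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()


def grad_magnitude(model):
    """Average L2 norm of parameter gradients after a backward pass."""
    norms = [p.grad.norm() for p in model.parameters() if p.grad is not None]
    return torch.stack(norms).mean().item()


def deat_train(model, loader, epochs=10, m_plus_ratio=1.5, device="cpu"):
    """Train with PGD adversarial examples, starting from 1 attack step and
    adding one more step whenever the per-epoch gradient magnitude grows past
    a (hypothetical) ratio of the reference magnitude."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    steps, ref_mag = 1, None
    for _ in range(epochs):
        epoch_mag = 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, steps=steps)
            opt.zero_grad()
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            epoch_mag += grad_magnitude(model)
            opt.step()
        epoch_mag /= len(loader)
        if ref_mag is None:
            ref_mag = epoch_mag
        elif epoch_mag > m_plus_ratio * ref_mag:  # hypothetical M+ trigger
            steps += 1   # gradually increase the adversarial iteration count
            ref_mag = epoch_mag
    return model
```

Because the trigger only reads gradient magnitudes that the backward pass already computes, a rule of this kind adds negligible overhead and could, in principle, wrap around other adversarial training methods as well.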