Adversarial training of Deep Neural Networks is known to be significantly more data-hungry than standard training. Furthermore, complex data augmentations such as AutoAugment, which have led to substantial gains in standard training of image classifiers, have not been successful with adversarial training. We first explain this contrasting behavior by viewing augmentation during training as a problem of domain generalization, and further propose Diverse Augmentation-based Joint Adversarial Training (DAJAT) to use data augmentations effectively in adversarial training. We aim to handle the conflicting goals of enhancing the diversity of the training dataset and training with data that is close to the test distribution by using a combination of simple and complex augmentations with separate batch normalization layers during training. We further utilize the popular Jensen-Shannon divergence loss to encourage joint learning across the diverse augmentations, thereby allowing the simple augmentations to guide the learning of the complex ones. Lastly, to improve the computational efficiency of the proposed method, we propose and utilize a two-step defense, Ascending Constraint Adversarial Training (ACAT), which uses an increasing epsilon schedule and weight-space smoothing to prevent gradient masking. The proposed method DAJAT achieves a substantially better robustness-accuracy trade-off than existing methods on the RobustBench leaderboard for ResNet-18 and WideResNet-34-10. The code for implementing DAJAT is available here: https://github.com/val-iisc/DAJAT.
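To make the joint-learning idea concrete, the following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of a Jensen-Shannon consistency term tying together predictions on simply and complexly augmented views of the same batch. The `bn_name` argument selecting a separate batch-normalization branch, the `js_weight` coefficient, and the function names are assumptions introduced only for illustration.

```python
# Hedged sketch: JS-divergence-based joint training on simple vs. complex
# augmentations with separate BN branches. Interface details are hypothetical.
import torch
import torch.nn.functional as F

def js_divergence(logits_list):
    """Jensen-Shannon divergence among softmax predictions of several views."""
    probs = [F.softmax(l, dim=1) for l in logits_list]
    # Log of the mixture distribution; clamp for numerical stability.
    log_mixture = torch.clamp(torch.stack(probs).mean(dim=0), 1e-7, 1.0).log()
    # Average KL(p_i || mixture) over all views.
    return sum(F.kl_div(log_mixture, p, reduction='batchmean') for p in probs) / len(probs)

def joint_augmentation_loss(model, x_simple, x_complex, y, js_weight=2.0):
    """Cross-entropy on each augmented view plus a JS term linking their predictions.

    x_simple  : batch with basic augmentations (e.g. pad-crop, horizontal flip)
    x_complex : the same images under a complex augmentation (e.g. AutoAugment)
    """
    logits_s = model(x_simple, bn_name='base')  # BN branch for simple augmentations (hypothetical API)
    logits_c = model(x_complex, bn_name='aa')   # separate BN branch for complex augmentations
    ce = F.cross_entropy(logits_s, y) + F.cross_entropy(logits_c, y)
    return ce + js_weight * js_divergence([logits_s, logits_c])
```

In this sketch the cross-entropy terms fit each view under its own batch-norm statistics, while the JS term encourages consistent predictions across views, which is how the simple augmentations can guide learning on the complex ones.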