Current state-of-the-art (SOTA) adversarially robust models are mostly based on adversarial training (AT) and differ mainly in the regularizers applied at the inner maximization or outer minimization steps. Because the inner maximization step is iterative, these methods are very expensive to train. We propose a non-iterative method that enforces the following ideas during training. First, attribution maps of adversarially robust models align more closely with the actual object in the image than those of naturally trained models. Second, the set of pixels allowed to perturb an image (and thereby change the model's decision) should be restricted to the object pixels only, which weakens the attack by shrinking the attack space. Our method achieves significant performance gains with little extra training cost (10-20%) over existing AT models and outperforms all other methods in terms of both adversarial and natural accuracy. We performed extensive experiments on the CIFAR-10, CIFAR-100, and TinyImageNet datasets and report results against many popular strong adversarial attacks to demonstrate the effectiveness of our method.
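To make the second idea concrete, here is a minimal sketch (not the authors' exact procedure) of confining an adversarial perturbation to object pixels: an attribution map is thresholded into a binary mask, and an FGSM-style one-step perturbation is applied only inside that mask. The input-gradient attribution, the function name `masked_fgsm`, and the threshold `tau` are illustrative assumptions; `model`, `x`, and `y` are placeholders for a classifier, an image batch, and labels.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, y, eps=8 / 255, tau=0.5):
    """One-step perturbation restricted to high-attribution (object) pixels.

    Hypothetical sketch: any attribution method could replace the
    input-gradient map used here.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    grad = x.grad.detach()

    # Input-gradient attribution as a stand-in attribution map;
    # normalize each image's map to [0, 1] before thresholding.
    attr = grad.abs().amax(dim=1, keepdim=True)                     # (N, 1, H, W)
    attr = attr / attr.amax(dim=(2, 3), keepdim=True).clamp_min(1e-12)
    mask = (attr >= tau).float()                                    # object pixels only

    # Perturb only where the mask is on, shrinking the attack space.
    x_adv = x.detach() + eps * grad.sign() * mask
    return x_adv.clamp(0, 1)
```

Restricting the perturbation support this way illustrates why limiting the attack to object pixels reduces attack strength: the adversary loses the background pixels as degrees of freedom.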