Recently, adversarial training has been incorporated into self-supervised contrastive pre-training to combine label efficiency with adversarial robustness. However, this robustness comes at the cost of expensive adversarial training. In this paper, we show a surprising fact: contrastive pre-training has an interesting yet implicit connection with robustness, and the natural robustness of the pre-trained representation enables us to design RUSH, a powerful algorithm against adversarial attacks that combines standard contrastive pre-training with randomized smoothing. RUSH boosts both standard accuracy and robust accuracy while significantly reducing training cost compared with adversarial training. Extensive empirical studies show that RUSH outperforms robust classifiers obtained from adversarial training by a significant margin on common benchmarks (CIFAR-10, CIFAR-100, and STL-10) under first-order attacks. In particular, under a PGD attack with $\ell_{\infty}$-norm perturbations of size 8/255 on CIFAR-10, our model with a ResNet-18 backbone reaches 77.8% robust accuracy and 87.9% standard accuracy. Compared with the state of the art, this is an improvement of over 15% in robust accuracy and a slight improvement in standard accuracy.
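To make the second ingredient concrete, the sketch below shows Monte-Carlo randomized-smoothing prediction applied on top of a frozen, contrastively pre-trained encoder with a linear head. It is a minimal illustration only: the names `encoder` and `linear_head`, and the values of `sigma`, `num_samples`, and `batch_size`, are assumptions for exposition and are not the paper's exact configuration.

```python
import torch

@torch.no_grad()
def smoothed_predict(encoder, linear_head, x, sigma=0.25, num_samples=100, batch_size=100):
    """Monte-Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + eps) = c), with eps ~ N(0, sigma^2 I),
    where f is the frozen base classifier (encoder + linear head).
    `x` is a single image tensor of shape (1, C, H, W)."""
    counts = None
    remaining = num_samples
    while remaining > 0:
        n = min(remaining, batch_size)
        remaining -= n
        # Replicate the input n times and add isotropic Gaussian noise.
        rep = x.repeat(n, 1, 1, 1)
        noisy = rep + sigma * torch.randn_like(rep)
        logits = linear_head(encoder(noisy))
        preds = logits.argmax(dim=1)
        votes = torch.bincount(preds, minlength=logits.shape[1])
        counts = votes if counts is None else counts + votes
    # Majority vote over the noisy copies gives the smoothed prediction.
    return counts.argmax().item()
```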