Contrastive learning (CL) can learn generalizable feature representations and achieve state-of-the-art performance on downstream tasks by finetuning a linear classifier on top of the pretrained representation. However, as adversarial robustness becomes vital in image classification, it remains unclear whether CL can preserve robustness when transferred to downstream tasks. The main challenge is that, in the self-supervised pretraining + supervised finetuning paradigm, adversarial robustness is easily forgotten due to the learning-task mismatch between pretraining and finetuning. We call this challenge 'cross-task robustness transferability'. To address this problem, in this paper we revisit and advance CL principles through the lens of robustness enhancement. We show that (1) the design of contrastive views matters: high-frequency components of images are beneficial to improving model robustness; (2) augmenting CL with a pseudo-supervision stimulus (e.g., pseudo-labels obtained from feature clustering) helps preserve robustness without forgetting. Equipped with these new designs, we propose AdvCL, a novel adversarial contrastive pretraining framework. We show that AdvCL enhances cross-task robustness transferability without loss of model accuracy or finetuning efficiency. With a thorough experimental study, we demonstrate that AdvCL outperforms state-of-the-art self-supervised robust learning methods across multiple datasets (CIFAR-10, CIFAR-100, and STL-10) and finetuning schemes (linear evaluation and full model finetuning).
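To make design (1) concrete, the sketch below illustrates one common way to extract a high-frequency view of an image that can serve as an additional contrastive view. This is a minimal illustration, not the paper's exact implementation; it assumes PyTorch tensors of shape (C, H, W) and uses a simple centered spectral mask whose `radius` is an illustrative hyperparameter.

```python
# Minimal sketch: build a high-frequency "view" of an image by removing
# low-frequency content in the 2D Fourier domain (assumption: this mirrors
# the spirit, not the exact recipe, of the paper's contrastive-view design).
import torch

def high_frequency_view(img: torch.Tensor, radius: int = 8) -> torch.Tensor:
    """Return the high-frequency component of an image tensor (C, H, W)."""
    c, h, w = img.shape
    # Move the zero-frequency component to the center of the spectrum.
    spec = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    # Zero out a (2*radius x 2*radius) low-frequency square around the center.
    mask = torch.ones(h, w, dtype=torch.bool)
    cy, cx = h // 2, w // 2
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = False
    spec = spec * mask
    # Back to the spatial domain; keep the real part as the high-frequency view.
    return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real
```

Such a view can be fed into the contrastive loss alongside the standard augmented views, so that the learned representation is encouraged to rely on high-frequency structure.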
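For design (2), the pseudo-supervision stimulus can be obtained by clustering encoder features of the unlabeled data and treating cluster assignments as pseudo-labels. The sketch below is a hedged illustration under that assumption; `cluster_head`, the loss weight `lam`, and the use of scikit-learn's KMeans are illustrative choices, not the paper's API.

```python
# Minimal sketch: derive pseudo-labels via feature clustering, then use them
# as an auxiliary supervised signal during adversarial contrastive pretraining.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

@torch.no_grad()
def make_pseudo_labels(encoder, loader, num_clusters=10, device="cuda"):
    """Cluster encoder features of the (unlabeled) data to get pseudo-labels."""
    feats = []
    for x, _ in loader:                      # ground-truth labels are ignored
        feats.append(encoder(x.to(device)).cpu())
    feats = torch.cat(feats).numpy()
    labels = KMeans(n_clusters=num_clusters, n_init=10).fit_predict(feats)
    return torch.from_numpy(labels).long()

# During pretraining, the pseudo-labels add a cross-entropy term on top of the
# contrastive objective, e.g. (illustrative names):
#   loss = contrastive_loss + lam * F.cross_entropy(cluster_head(z), pseudo_y)
```

The intuition is that this cluster-based supervision mimics the downstream classification task during pretraining, which helps the robustness learned against adversarial perturbations survive the supervised finetuning stage.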