Recent works have shown that self-supervised learning can achieve remarkable robustness when integrated with adversarial training (AT). However, the robustness gap between supervised AT (sup-AT) and self-supervised AT (self-AT) remains significant. Motivated by this observation, we revisit existing self-AT methods and discover an inherent dilemma that affects self-AT robustness: data augmentations that are either too strong or too weak are harmful to self-AT, and a medium strength is insufficient to bridge the gap. To resolve this dilemma, we propose a simple remedy named DYNACL (Dynamic Adversarial Contrastive Learning). In particular, we propose an augmentation schedule that gradually anneals from a strong augmentation to a weak one to benefit from both extreme cases. Besides, we adopt a fast post-processing stage for adapting it to downstream tasks. Through extensive experiments, we show that DYNACL can improve state-of-the-art self-AT robustness by 8.84% under Auto-Attack on the CIFAR-10 dataset, and can even outperform vanilla supervised adversarial training for the first time. Our code is available at \url{https://github.com/PKU-ML/DYNACL}.
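To make the annealing idea concrete, the following is a minimal Python sketch of an augmentation schedule whose strength decays from strong to weak over training. The linear decay, the scaling factors, and the specific torchvision transforms are illustrative assumptions, not the exact configuration used by DYNACL.

\begin{verbatim}
import torchvision.transforms as T

def aug_strength(epoch, total_epochs, min_strength=0.0):
    """Linearly anneal augmentation strength from 1.0 (strong) to min_strength (weak).
    The linear form is an assumption for illustration."""
    k = 1.0 - epoch / float(total_epochs)
    return max(k, min_strength)

def build_transform(strength):
    """Build a contrastive-view transform whose intensity is scaled by `strength`.
    Transform choices and coefficients are hypothetical, SimCLR-style defaults."""
    return T.Compose([
        T.RandomResizedCrop(32, scale=(1.0 - 0.9 * strength, 1.0)),
        T.RandomHorizontalFlip(),
        T.RandomApply(
            [T.ColorJitter(0.4 * strength, 0.4 * strength,
                           0.4 * strength, 0.1 * strength)],
            p=0.8 * strength),
        T.RandomGrayscale(p=0.2 * strength),
        T.ToTensor(),
    ])

# Usage: rebuild the view transform each epoch with the annealed strength,
# so early epochs see strong augmentations and late epochs see weak ones.
# for epoch in range(total_epochs):
#     transform = build_transform(aug_strength(epoch, total_epochs))
#     ...
\end{verbatim}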