Adversarial training (AT) for robust representation learning and self-supervised learning (SSL) for unsupervised representation learning are two active research fields. Integrating AT into SSL, multiple prior works have accomplished a highly significant yet challenging task: learning robust representations without labels. A widely used framework is adversarial contrastive learning, which couples AT and SSL and thus constitutes a very complex optimization problem. Inspired by the divide-and-conquer philosophy, we conjecture that this coupled problem might be simplified as well as improved by solving two sub-problems: non-robust SSL and pseudo-supervised AT. This motivation shifts the focus of the task from seeking an optimal integrating strategy for a coupled problem to finding sub-solutions for sub-problems. Accordingly, this work discards prior practices of directly introducing AT into SSL frameworks and proposes a two-stage framework termed Decoupled Adversarial Contrastive Learning (DeACL). Extensive experimental results demonstrate that our DeACL achieves SOTA self-supervised adversarial robustness while significantly reducing the training time, which validates its effectiveness and efficiency. Moreover, our DeACL constitutes a more explainable solution, and its success also bridges the gap with semi-supervised AT for exploiting unlabeled samples for robust representation learning. The code is publicly accessible at https://github.com/pantheon5100/DeACL.
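To make the two-stage decomposition concrete, below is a minimal sketch of stage 2 (pseudo-supervised AT), assuming the stage-1 SSL encoder is available as a frozen teacher. The PGD-style attack, the cosine-similarity distillation loss, and all function names are illustrative assumptions for this sketch; the abstract does not fix these details.

```python
# Hypothetical sketch of DeACL's second stage (pseudo-supervised AT).
# Assumptions (not specified in the abstract): stage 1 yields a frozen
# SSL "teacher" encoder; stage 2 trains a "student" encoder whose
# pseudo-targets are the teacher's clean representations; the attack is
# PGD and the distillation loss is cosine similarity.
import torch
import torch.nn.functional as F


def pgd_attack(student, teacher_feat, x, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft adversarial examples that maximize the mismatch between
    the student's features and the teacher's (frozen) pseudo-targets."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Dissimilarity to the pseudo-target is the attack objective.
        loss = -F.cosine_similarity(student(x_adv), teacher_feat, dim=-1).mean()
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv.detach() + alpha * grad.sign()).clamp(x - eps, x + eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()


def deacl_stage2_step(student, teacher, x, optimizer):
    """One pseudo-supervised AT step: the teacher's clean features act
    as pseudo-labels for the student on adversarial inputs."""
    with torch.no_grad():
        teacher_feat = teacher(x)  # pseudo-targets from the stage-1 SSL model
    x_adv = pgd_attack(student, teacher_feat, x)
    # Align the student's adversarial and clean features with the targets.
    loss = -(F.cosine_similarity(student(x_adv), teacher_feat, dim=-1).mean()
             + F.cosine_similarity(student(x), teacher_feat, dim=-1).mean())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

One plausible source of the reported efficiency gain under this decomposition is that stage 2 distills from a single frozen target per image, whereas a coupled adversarial contrastive objective typically requires attacking multiple augmented views per step.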