Contrastive learning has gained popularity as an effective self-supervised representation learning technique. Several research directions improve traditional contrastive approaches, e.g., prototypical contrastive methods better capture the semantic similarity among instances and reduce the computational burden by considering cluster prototypes or cluster assignments, while adversarial instance-wise contrastive methods improve robustness against a variety of attacks. To the best of our knowledge, no prior work jointly considers robustness, cluster-wise semantic similarity and computational efficiency. In this work, we propose SwARo, an adversarial contrastive framework that incorporates cluster assignment permutations to generate representative adversarial samples. We evaluate SwARo on multiple benchmark datasets and against various white-box and black-box attacks, obtaining consistent improvements over state-of-the-art baselines.