Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations. It uses pairs of augmentations of unlabeled training examples to define a classification task for pretext learning of a deep embedding. Despite extensive work on augmentation procedures, prior work does not address the selection of challenging negative pairs, as images within a sampled batch are treated independently. This paper addresses the problem by introducing a new family of adversarial examples for contrastive learning and using these examples to define a new adversarial training algorithm for SSL, denoted CLAE. Compared to standard CL, the use of adversarial examples creates more challenging positive pairs, and adversarial training produces harder negative pairs by accounting for all images in a batch during the optimization. CLAE is compatible with many CL methods in the literature. Experiments show that it improves the performance of several existing CL baselines on multiple datasets.
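To make the idea concrete, below is a minimal sketch (not the paper's implementation) of how an FGSM-style perturbation of one augmented view can be driven by the contrastive loss itself. The PyTorch encoder, the function names, and the `epsilon` budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    # Standard InfoNCE objective: z1[i] and z2[i] are embeddings of two
    # augmentations of image i; every other image in the batch is a negative.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                  # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def adversarial_view(encoder, x1, x2, epsilon=0.03, temperature=0.5):
    # One FGSM-style ascent step on the contrastive loss: perturbing view x1
    # to *increase* the loss yields a harder positive pair (x1_adv, x2) and,
    # because the loss couples all images in the batch, harder in-batch
    # negatives as well. `epsilon` is an illustrative perturbation budget.
    x1_adv = x1.clone().detach().requires_grad_(True)
    loss = info_nce_loss(encoder(x1_adv), encoder(x2), temperature)
    loss.backward()  # in practice, zero the encoder's gradients afterwards
    with torch.no_grad():
        x1_adv = x1_adv + epsilon * x1_adv.grad.sign()  # ascend the loss
    return x1_adv.detach()
```

A training step would then combine the standard contrastive loss on (x1, x2) with the loss on the adversarial pair (x1_adv, x2), so that both harder positives and harder in-batch negatives enter the optimization.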