Recently, contrastive learning has achieved strong results in self-supervised learning, where the main idea is to pull two augmentations of an image (a positive pair) closer together than other random images (negative pairs). We argue that not all random images are equal. Hence, we introduce a self-supervised learning algorithm that uses a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model into the student model by capturing the similarity of a query image to some random images and transferring that knowledge to the student. We argue that our method is less constrained than recent contrastive learning methods, so it can learn better features. In particular, our method should handle unbalanced and unlabeled data better than existing contrastive learning methods, because the randomly chosen negative set may include many samples that are semantically similar to the query image. In this case, our method labels them as highly similar, while standard contrastive methods label them as negative pairs. Our method achieves results comparable to state-of-the-art models. We also show that our method performs better in settings where the unlabeled data is unbalanced. Our code is available here: https://github.com/UMBCvision/ISD.
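To make the idea of soft similarity targets concrete, the following is a minimal sketch of such a similarity-distillation loss in PyTorch. It is not the authors' exact implementation (see the repository for that); the function name `isd_loss`, the temperature values, and the memory-bank handling are illustrative assumptions. The student's similarity distribution over a set of random anchor images is trained to match the teacher's soft distribution, instead of using binary positive/negative labels.

```python
import torch
import torch.nn.functional as F

def isd_loss(student_q, teacher_q, memory_bank, temp_s=0.1, temp_t=0.05):
    """Hypothetical sketch of a similarity-distillation loss.

    student_q:   (B, D) student embeddings of the query images
    teacher_q:   (B, D) teacher embeddings of the same images (no gradient)
    memory_bank: (K, D) embeddings of random anchor images
    """
    # L2-normalize so dot products are cosine similarities
    student_q = F.normalize(student_q, dim=1)
    teacher_q = F.normalize(teacher_q, dim=1)
    bank = F.normalize(memory_bank, dim=1)

    # Similarity of each query to every anchor image
    sim_s = student_q @ bank.t() / temp_s              # (B, K) student logits
    with torch.no_grad():
        sim_t = teacher_q @ bank.t() / temp_t          # (B, K) teacher logits
        targets = F.softmax(sim_t, dim=1)              # soft labels from the teacher

    # KL divergence between the teacher and student similarity distributions
    return F.kl_div(F.log_softmax(sim_s, dim=1), targets, reduction="batchmean")
```

In this sketch the "slowly evolving teacher" would be maintained as an exponential moving average of the student's weights, so the teacher's soft targets change gradually as training proceeds.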