In this paper, we introduce a novel self-supervised learning (SSL) loss for image representation learning. There is a growing belief that generalization in deep neural networks is linked to their ability to discriminate object shapes. Since an object's shape is determined by the locations of its parts, we propose to detect parts that have been artificially misplaced. We represent object parts with image tokens and train a ViT to detect which token has been combined with an incorrect positional embedding. We then introduce sparsity in the inputs to make the model more robust to occlusions and to speed up training. We call our method DILEMMA, which stands for Detection of Incorrect Location EMbeddings with MAsked inputs. We apply DILEMMA to MoCoV3, DINO, and SimCLR and show performance improvements of 4.41%, 3.97%, and 0.5%, respectively, under the same training time, with linear probing transfer on ImageNet-1K. We also show full fine-tuning improvements when MAE is combined with our method on ImageNet-100, and we evaluate our method via fine-tuning on common SSL benchmarks. Moreover, we show that when downstream tasks are strongly reliant on shape (such as in the YOGA-82 pose dataset), our pre-trained features yield a significant gain over prior work.
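To make the pretext task concrete, the sketch below shows how DILEMMA-style training targets could be constructed: tokens are first sparsified (masking), then a random subset of the surviving tokens has its positional embeddings permuted, and a per-token binary label marks the misplaced ones. This is an illustrative reconstruction under assumed ratios and helper names (`dilemma_targets`, `mis_ratio`, `keep_ratio` are ours, not the paper's), paired with the per-token binary cross-entropy a detection head would minimize.

```python
import numpy as np

def dilemma_targets(pos_embed, mis_ratio=0.2, keep_ratio=0.5, rng=None):
    """Build DILEMMA-style inputs (illustrative sketch, not the paper's exact recipe).

    pos_embed: (num_tokens, dim) array of positional embeddings.
    Returns the indices of kept tokens, their (possibly swapped) positional
    embeddings, and per-token labels (1 = incorrect location embedding).
    """
    rng = np.random.default_rng(rng)
    n, _ = pos_embed.shape
    # 1) Input sparsity: keep only a random subset of tokens (masking).
    kept = rng.choice(n, size=int(n * keep_ratio), replace=False)
    pe = pos_embed[kept].copy()
    # 2) Misplacement: cyclically swap positional embeddings among a random
    #    subset of the kept tokens, so each chosen token gets a wrong location.
    m = max(2, int(len(kept) * mis_ratio))
    idx = rng.choice(len(kept), size=m, replace=False)
    pe[idx] = pe[np.roll(idx, 1)]
    labels = np.zeros(len(kept), dtype=np.int64)
    labels[idx] = 1
    return kept, pe, labels

def bce_loss(logits, labels):
    """Per-token binary cross-entropy for the misplacement-detection head."""
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))
```

In this sketch the ViT would consume the kept tokens with the swapped positional embeddings and predict one logit per token; the shape-sensitive signal comes from having to notice that a patch's content does not match its claimed location.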