Self-supervised learning holds promise in leveraging large amounts of unlabeled data; however, much of its progress has thus far been limited to highly curated pre-training data such as ImageNet. We explore the effects of contrastive learning from larger, less-curated image datasets such as YFCC, and find that there is indeed a large difference in the resulting representation quality. We hypothesize that this curation gap is due to a shift in the distribution of image classes -- which is more diverse and heavy-tailed -- resulting in less relevant negative samples to learn from. We test this hypothesis with a new approach, Divide and Contrast (DnC), which alternates between contrastive learning and clustering-based hard negative mining. When pretrained on less-curated datasets, DnC greatly improves the performance of self-supervised learning on downstream tasks, while remaining competitive with the current state of the art on curated datasets.
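The alternation described above -- cluster the data, then draw contrastive negatives from within a cluster so they are harder -- can be sketched in a highly simplified, self-contained form. The toy embeddings, the k-means routine, and the per-cluster negative sampling below are illustrative assumptions for exposition, not the paper's actual training pipeline.

```python
import math
import random

random.seed(0)

def l2norm(v):
    """Normalize a vector to unit length (guard against zero vectors)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def kmeans(points, k, iters=20):
    """Minimal k-means: the 'divide' step that partitions the embeddings."""
    centers = random.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])),
            )
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

def info_nce(anchor, positive, negatives, tau=0.1):
    """Standard InfoNCE loss: -log softmax of anchor . positive over candidates."""
    sims = [sum(a * b for a, b in zip(anchor, positive))]
    sims += [sum(a * b for a, b in zip(anchor, n)) for n in negatives]
    logits = [s / tau for s in sims]
    m = max(logits)  # subtract the max for numerical stability
    return -(logits[0] - m) + math.log(sum(math.exp(l - m) for l in logits))

# Toy data: two augmented "views" per image, as unit vectors in 2-D.
views_a = [l2norm([random.gauss(0, 1), random.gauss(0, 1)]) for _ in range(32)]
views_b = [l2norm([x + 0.05 * random.gauss(0, 1) for x in v]) for v in views_a]

clusters = kmeans(views_a, k=4)

# 'Contrast' step: negatives come only from the anchor's own cluster,
# so they are near neighbours and therefore harder than random negatives.
losses = []
for i, (a, p) in enumerate(zip(views_a, views_b)):
    negs = [views_a[j] for j in range(len(views_a))
            if j != i and clusters[j] == clusters[i]]
    if negs:  # skip singleton clusters, which contribute no negatives
        losses.append(info_nce(a, p, negs))

mean_loss = sum(losses) / len(losses)
print(round(mean_loss, 3))
```

Restricting negatives to the anchor's cluster is one simple way to realize "clustering-based hard negative mining" on a diverse, heavy-tailed dataset: random negatives from unrelated classes are easy to reject, while same-cluster negatives force the representation to make finer distinctions.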