Instance-discrimination-based contrastive learning has emerged as a leading approach for self-supervised learning of visual representations. Yet its generalization to novel tasks remains elusive compared to representations learned with supervision, especially in the few-shot setting. We demonstrate how one can incorporate supervision into the instance-discrimination-based contrastive self-supervised learning framework to learn representations that generalize better to novel tasks. We call our approach CIDS (Contrastive Instance Discrimination with Supervision). CIDS performs favorably compared to existing algorithms on popular few-shot benchmarks such as Mini-ImageNet and Tiered-ImageNet. We also propose a novel model selection algorithm that can be used in conjunction with a universal embedding trained using CIDS to outperform state-of-the-art algorithms on the challenging Meta-Dataset benchmark.
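To make the central idea concrete, one common way to incorporate label supervision into a contrastive instance-discrimination objective is to treat all same-class samples in a batch as positives for each anchor (a SupCon-style loss). The sketch below is illustrative only; the paper's exact CIDS loss, temperature, and positive/negative construction may differ, and the function name and hyperparameters here are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """SupCon-style supervised contrastive loss (illustrative sketch,
    not the paper's exact CIDS objective). For each anchor, positives
    are all other batch samples sharing its label."""
    # L2-normalize embeddings so dot products are cosine similarities
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature  # pairwise scaled similarities
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)

    # log-softmax over all samples except the anchor itself
    sim_exp = np.exp(sim)
    sim_exp[mask_self] = 0.0
    log_prob = sim - np.log(sim_exp.sum(axis=1, keepdims=True))

    # positives: same label as the anchor, excluding the anchor
    pos_mask = (labels[:, None] == labels[None, :]) & ~mask_self

    # mean log-probability over each anchor's positives, then negate
    pos_counts = np.maximum(pos_mask.sum(axis=1), 1)
    loss_per_anchor = -(log_prob * pos_mask).sum(axis=1) / pos_counts
    return loss_per_anchor.mean()

# Toy batch: 4 random embeddings, two classes
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
labels = np.array([0, 0, 1, 1])
print(supervised_contrastive_loss(feats, labels))
```

Minimizing this loss pulls same-class embeddings together and pushes different-class embeddings apart, which is the sense in which supervision shapes the contrastive representation.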