Few-shot learning aims to transfer information learned on one task to enable generalization to novel tasks given only a few examples. This information is present both in the domain and in the class labels. In this work we investigate the complementary roles of these two sources of information by combining instance-discriminative contrastive learning and supervised learning in a single framework called Supervised Momentum Contrastive learning (SUPMOCO). Our approach avoids a problem observed in supervised learning, where information in images that is not relevant to the training task is discarded, hampering generalization to novel tasks. We show that (self-supervised) contrastive learning and supervised learning are mutually beneficial, leading to a new state of the art on META-DATASET, a recently introduced benchmark for few-shot learning. Our method is based on a simple modification of MOCO and scales better than prior work on combining supervised and self-supervised learning. This allows us to easily combine data from multiple domains, leading to further improvements.
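To make the combination concrete, below is a minimal sketch of how a MOCO-style objective can be extended with label information: queued keys that share the query's class are treated as additional positives, alongside the usual momentum-encoded key of the same image. This is an illustration under stated assumptions, not the paper's implementation; the names (`supervised_moco_loss`, `queue_labels`, `tau`) are hypothetical, and the momentum-encoder and queue updates are omitted.

```python
# Hypothetical sketch: a supervised MoCo-style loss. Assumes a FIFO queue of
# momentum-encoder keys with their class labels stored alongside them.
import torch
import torch.nn.functional as F

def supervised_moco_loss(q, k, labels, queue, queue_labels, tau=0.07):
    """q: (B, D) query embeddings; k: (B, D) momentum-encoded keys of the
    same images; labels: (B,) class labels of the batch; queue: (K, D)
    past keys; queue_labels: (K,) their class labels; tau: temperature."""
    q = F.normalize(q, dim=1)
    keys = F.normalize(torch.cat([k, queue], dim=0), dim=1)   # (B + K, D)
    logits = q @ keys.t() / tau                               # (B, B + K)

    # Positives: the paired momentum key of the same image (self-supervised
    # term) plus any queued key sharing the query's class (supervised term).
    B, K = q.size(0), queue.size(0)
    pos_mask = torch.zeros(B, B + K, dtype=torch.bool, device=q.device)
    pos_mask[:, :B] = torch.eye(B, dtype=torch.bool, device=q.device)
    pos_mask[:, B:] = labels.unsqueeze(1).eq(queue_labels.unsqueeze(0))

    # SupCon-style average of per-positive log-likelihoods.
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    return -(log_prob * pos_mask).sum(1).div(pos_mask.sum(1)).mean()

# Example: B=8 queries/keys in R^128 against a queue of K=1024 past keys.
B, D, K = 8, 128, 1024
loss = supervised_moco_loss(torch.randn(B, D), torch.randn(B, D),
                            torch.randint(0, 10, (B,)),
                            torch.randn(K, D), torch.randint(0, 10, (K,)))
```

With all labels distinct this reduces to the standard MOCO objective, which is one way to read the claim that the self-supervised and supervised signals combine in a single framework.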