We present a collaborative learning method called Mutual Contrastive Learning (MCL) for general visual representation learning. The core idea of MCL is mutual interaction and transfer of contrastive distributions among a cohort of models. Benefiting from MCL, each model can learn extra contrastive knowledge from the others, leading to more meaningful feature representations for visual recognition tasks. MCL is conceptually simple yet empirically powerful: it is a generic framework that applies to both supervised and self-supervised representation learning. Experimental results on supervised and self-supervised image classification, transfer learning, and few-shot learning show that MCL yields consistent performance gains, demonstrating that it guides networks toward better feature representations.
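To make the core idea concrete, here is a minimal sketch (not the paper's exact formulation) of transferring contrastive distributions between two models: each model produces a softmax distribution over similarities between an anchor embedding and a set of candidate embeddings, and a symmetric KL divergence aligns the two distributions. All names (`contrastive_distribution`, `kl`, the random embeddings) are illustrative assumptions.

```python
import numpy as np

def contrastive_distribution(anchor, candidates, temperature=0.1):
    # Cosine similarities between anchor and candidates, softened by a
    # temperature, then softmax -> a "contrastive distribution".
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = c @ a / temperature
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

def kl(p, q, eps=1e-12):
    # KL divergence KL(p || q) between two discrete distributions.
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Stand-ins for two models embedding the same anchor and candidate images.
rng = np.random.default_rng(0)
anchor_a, anchor_b = rng.normal(size=8), rng.normal(size=8)
cands_a, cands_b = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))

p = contrastive_distribution(anchor_a, cands_a)
q = contrastive_distribution(anchor_b, cands_b)

# Mutual transfer: a symmetric KL term pulls the two models'
# contrastive distributions toward each other during training.
mutual_loss = kl(p, q) + kl(q, p)
print(round(mutual_loss, 4))
```

In actual training this term would be added to each model's own contrastive objective, so both models are optimized jointly rather than one distilling into the other.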