This work aims to improve instance retrieval with self-supervision. We find that fine-tuning with recently developed self-supervised learning (SSL) methods, such as SimCLR and MoCo, fails to improve the performance of instance retrieval. In this work, we identify that the learned representations for instance retrieval should be invariant to large variations in viewpoint, background, etc., whereas the self-augmented positives used by current SSL methods cannot provide strong enough signals for learning robust instance-level representations. To overcome this problem, we propose InsCLR, a new SSL method built on \textit{instance-level} contrast, which learns intra-class invariance by dynamically mining meaningful pseudo-positive samples from both mini-batches and a memory bank during training. Extensive experiments demonstrate that InsCLR achieves performance comparable to or better than state-of-the-art SSL methods on instance retrieval. Code is available at https://github.com/zeludeng/insclr.