In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the intensity of pushing samples of the same class close as more samples are pulled together with their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 establish a new state of the art for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones. Our code is available at \url{https://github.com/jiequancui/Parametric-Contrastive-Learning}.
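To make the mechanism concrete, the following is a minimal PyTorch-style sketch of a PaCo-style loss with parametric learnable class centers contrasted alongside the usual supervised contrastive terms. The class `PaCoLoss` and the hyperparameter names `alpha` and `temperature` are illustrative assumptions, not the authors' exact implementation; see the linked repository for the official code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaCoLoss(nn.Module):
    """Minimal sketch of a PaCo-style loss (illustrative, not the official code).

    A learnable matrix of per-class centers is contrasted against batch
    features together with the usual supervised contrastive terms; `alpha`
    down-weights sample-to-sample positives relative to the sample-to-center
    positive, which carries weight 1.
    """

    def __init__(self, num_classes, feat_dim, temperature=0.07, alpha=0.05):
        super().__init__()
        # Parametric class-wise learnable centers (the rebalancing component).
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.temperature = temperature
        self.alpha = alpha

    def forward(self, feats, labels):
        # feats: (B, D) embeddings; labels: (B,) integer class ids.
        feats = F.normalize(feats, dim=1)
        centers = F.normalize(self.centers, dim=1)

        sim_ss = feats @ feats.t() / self.temperature    # sample-to-sample (B, B)
        sim_sc = feats @ centers.t() / self.temperature  # sample-to-center (B, C)
        logits = torch.cat([sim_ss, sim_sc], dim=1)      # (B, B + C)

        B, C = feats.size(0), centers.size(0)
        self_mask = torch.eye(B, dtype=torch.bool, device=feats.device)

        # Denominator: every other sample plus every center (no self term).
        denom_mask = torch.cat(
            [~self_mask,
             torch.ones(B, C, dtype=torch.bool, device=feats.device)], dim=1)
        log_prob = logits - (logits.exp() * denom_mask.float()).sum(
            dim=1, keepdim=True).log()

        # Positives: same-class batch samples (weight alpha) and own center (weight 1).
        pos_ss = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        pos_sc = F.one_hot(labels, num_classes=C).bool()
        weights = torch.cat([self.alpha * pos_ss.float(), pos_sc.float()], dim=1)

        # Weighted mean of positive log-probabilities; the own-center positive
        # guarantees weights.sum(dim=1) >= 1, so no division-by-zero guard is needed.
        loss = -(weights * log_prob).sum(dim=1) / weights.sum(dim=1)
        return loss.mean()
```

In use, this would be applied to (normalized) projection-head outputs, e.g. `loss = PaCoLoss(num_classes=1000, feat_dim=128)(features, targets)`; because the centers are `nn.Parameter`s, they are updated by the same optimizer step as the backbone, which is what "rebalancing from an optimization perspective" refers to.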