In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Based on theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers that rebalance the loss from an optimization perspective. Further, we analyze our PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively intensify the pull between samples of the same class as more samples are drawn toward their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 demonstrate new state-of-the-art performance for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones; e.g., our ResNet-200 achieves 81.8% top-1 accuracy. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.
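To make the mechanism concrete, below is a minimal PyTorch sketch of a parametric contrastive loss in the spirit of PaCo: a set of learnable class centers joins the contrast set of a supervised contrastive loss, and sample-to-sample positive terms are down-weighted relative to the sample-to-center term. The class name `PaCoLoss`, the hyperparameter names `alpha` and `temperature`, and all implementation details here are illustrative assumptions, not the authors' code; see the repository linked above for the actual implementation.

```python
# Minimal sketch (assumptions noted above): a supervised contrastive loss
# augmented with learnable per-class centers, where sample-to-sample
# positives are down-weighted by alpha relative to the center positive.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PaCoLoss(nn.Module):
    def __init__(self, num_classes, feat_dim, temperature=0.07, alpha=0.05):
        super().__init__()
        # One learnable center per class; these enter the contrast set
        # alongside the batch samples and act as rebalancing anchors.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.temperature = temperature
        self.alpha = alpha  # weight on sample-to-sample positive terms

    def forward(self, features, labels):
        # features: (B, D) embeddings; labels: (B,) integer class labels
        features = F.normalize(features, dim=1)
        centers = F.normalize(self.centers, dim=1)

        # Similarities to the other samples in the batch and to all centers.
        sim_samples = features @ features.t() / self.temperature  # (B, B)
        sim_centers = features @ centers.t() / self.temperature   # (B, C)

        # Mask out self-similarity so a sample is not its own positive
        # (a large negative value acts as -inf but avoids 0 * inf = NaN).
        B = features.size(0)
        eye = torch.eye(B, dtype=torch.bool, device=features.device)
        sim_samples = sim_samples.masked_fill(eye, -1e9)

        # The softmax denominator runs over both samples and centers.
        logits = torch.cat([sim_samples, sim_centers], dim=1)     # (B, B+C)
        log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

        # Positives: same-class batch samples (weight alpha) and the
        # ground-truth center (weight 1.0); the center term dominates,
        # which rebalances the loss toward rare classes.
        same_class = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        one_hot = F.one_hot(labels, num_classes=centers.size(0)).float()
        weights = torch.cat([self.alpha * same_class.float(), one_hot], dim=1)

        loss = -(weights * log_prob).sum(dim=1) / weights.sum(dim=1)
        return loss.mean()
```

Under this sketch, every sample always has its own class center as a positive, so a tail-class sample receives a stable pull toward its center even when no same-class neighbor appears in the batch; this is one way to read the rebalancing effect described above.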