Learning discriminative image representations plays a vital role in long-tailed image classification because it eases classifier learning in imbalanced cases. Given the promising performance contrastive learning has recently shown in representation learning, in this work we explore effective supervised contrastive learning strategies and tailor them to learn better image representations from imbalanced data, in order to boost classification accuracy. Specifically, we propose a novel hybrid network structure composed of a supervised contrastive loss for learning image representations and a cross-entropy loss for learning classifiers, where training progressively transitions from feature learning to classifier learning to embody the idea that better features make better classifiers. We explore two variants of contrastive loss for feature learning, which differ in form but share the common idea of pulling samples from the same class together in the normalized embedding space and pushing samples from different classes apart. One is the recently proposed supervised contrastive (SC) loss, which builds on the state-of-the-art unsupervised contrastive loss by incorporating positive samples from the same class. The other is a prototypical supervised contrastive (PSC) learning strategy, which addresses the intensive memory consumption of the standard SC loss and thus shows more promise under a limited memory budget. Extensive experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive-learning-based hybrid networks in long-tailed classification.
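As a rough illustration of how the two losses can be combined, the sketch below (PyTorch-style; the function names, temperature value, and the simple linear schedule for the mixing weight `alpha` are hypothetical and not taken from the paper) pairs a supervised contrastive loss on normalized embeddings with a cross-entropy loss on classifier logits, and shifts the weight from feature learning toward classifier learning over training.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SC loss: pull same-class samples together and push different-class
    samples apart in the normalized embedding space."""
    z = F.normalize(embeddings, dim=1)                       # (N, d), unit-norm embeddings
    sim = z @ z.T / temperature                              # pairwise similarities
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    sim = sim.masked_fill(~not_self, float("-inf"))          # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1)
    # Mean log-probability of positives per anchor; anchors with no positives are skipped.
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count.clamp(min=1)
    return loss[pos_count > 0].mean()

def hybrid_loss(embeddings, logits, labels, alpha):
    """Weighted sum of the SC feature-learning loss and the
    cross-entropy classifier loss."""
    return alpha * supervised_contrastive_loss(embeddings, labels) \
        + (1.0 - alpha) * F.cross_entropy(logits, labels)

def alpha_schedule(epoch, total_epochs):
    """Hypothetical linear schedule: emphasis moves from feature learning
    (alpha near 1) to classifier learning (alpha near 0)."""
    return max(0.0, 1.0 - epoch / total_epochs)
```

The PSC variant mentioned above would instead contrast each sample against one prototype per class rather than against all other same-class samples in the batch, which is what reduces the memory cost under a limited budget.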


