Recent feature contrastive learning (FCL) has shown promising performance in unsupervised representation learning. For closed-set representation learning, however, where labeled and unlabeled data share the same semantic space, FCL yields only limited gains because it does not involve class semantics during optimization. Consequently, although the produced features are information-rich, there is no guarantee that they can be easily classified by the class weights learned from the labeled data. To tackle this issue, we propose a novel probability contrastive learning (PCL) in this paper, which not only produces rich features but also encourages them to be distributed around the class prototypes. Specifically, we perform contrastive learning on the output probabilities after softmax instead of on the extracted features used in FCL, which naturally exploits the class semantics during optimization. Moreover, we remove the $\ell_{2}$ normalization used in traditional FCL and directly use the $\ell_{1}$-normalized probabilities for contrastive learning. The proposed PCL is simple and effective. We conduct extensive experiments on three closed-set image classification tasks, i.e., unsupervised domain adaptation, semi-supervised learning, and semi-supervised domain adaptation. The results on multiple datasets demonstrate that PCL consistently achieves considerable gains and attains state-of-the-art performance on all three tasks.
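As a minimal sketch (not the authors' released code), the following PyTorch-style snippet illustrates the idea stated above: an InfoNCE-style contrastive loss is computed on softmax probabilities, which are $\ell_{1}$-normalized by construction, with no $\ell_{2}$ normalization applied. The function name, the stop-gradient on the key view, and the temperature value are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def probability_contrastive_loss(logits_q, logits_k, temperature=0.1):
    """InfoNCE-style loss on softmax probabilities (a sketch of PCL).

    Unlike feature contrastive learning, similarities are measured between
    l1-normalized class-probability vectors (softmax outputs); no l2
    normalization is applied.
    """
    p_q = F.softmax(logits_q, dim=1)           # (B, C), rows sum to 1 (l1-normalized)
    p_k = F.softmax(logits_k, dim=1).detach()  # key view, gradient stopped (assumption)

    # Pairwise similarities between probability vectors of the two views.
    sim = torch.matmul(p_q, p_k.t()) / temperature  # (B, B)

    # Positives lie on the diagonal: the same image under the other augmentation.
    targets = torch.arange(p_q.size(0), device=p_q.device)
    return F.cross_entropy(sim, targets)

# Usage sketch: loss = probability_contrastive_loss(model(aug1(x)), model(aug2(x)))
```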