Feature contrastive learning (FCL) has recently shown promising performance in self-supervised representation learning. For domain adaptation, however, FCL yields limited gains because the class weights are not involved during optimization, so the learned features are not guaranteed to cluster around the class weights learned from the source data. To tackle this issue, we propose a novel probability contrastive learning (PCL) method in this paper, which not only produces compact features but also enforces them to be distributed around the class weights. Specifically, we perform contrastive learning on the softmax output probabilities instead of the extracted features and remove the $\ell_{2}$ normalization used in traditional FCL. In this way, the probabilities are driven toward a one-hot form, which narrows the distance between the features and the class weights. Our proposed PCL is simple and effective. We conduct extensive experiments on two domain adaptation tasks, i.e., unsupervised domain adaptation and semi-supervised domain adaptation. The results on multiple datasets demonstrate that PCL consistently brings considerable gains and achieves state-of-the-art performance. In addition, our method also obtains considerable gains on semi-supervised learning tasks when labeled data is scarce.
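To make the core change concrete, below is a minimal sketch of how a probability contrastive loss could look under an InfoNCE-style formulation, which we assume here for illustration; the function name `pcl_loss`, the `temperature` value, and the pairing of two augmented views are illustrative choices, not the paper's exact formulation. The key point it shows is that the contrastive similarity is computed on softmax probabilities without any $\ell_{2}$ normalization.

```python
import torch
import torch.nn.functional as F


def pcl_loss(logits_q: torch.Tensor, logits_k: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Sketch of a probability contrastive loss (illustrative, not the official implementation).

    logits_q, logits_k: classifier outputs (pre-softmax) for two augmented views
    of the same batch, each of shape (N, num_classes).
    """
    # Contrast softmax probabilities directly, without l2-normalizing them,
    # so maximizing similarity pushes each probability vector toward one-hot.
    p_q = F.softmax(logits_q, dim=1)
    p_k = F.softmax(logits_k, dim=1)

    # Pairwise similarity between query and key probabilities, scaled by temperature.
    sim = torch.mm(p_q, p_k.t()) / temperature  # shape (N, N)

    # Diagonal entries are the positive pairs; all other samples act as negatives.
    targets = torch.arange(p_q.size(0), device=p_q.device)
    return F.cross_entropy(sim, targets)
```

In a standard FCL setup the same loss would be computed on $\ell_{2}$-normalized backbone features; swapping in unnormalized probabilities is what ties the objective to the classifier's class weights.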