Semi-supervised learning (SSL) has achieved great success in leveraging a large amount of unlabeled data to learn a promising classifier. A popular approach is pseudo-labeling, which generates pseudo labels only for those unlabeled data with high-confidence predictions. As for the low-confidence ones, existing methods often simply discard them because such unreliable pseudo labels may mislead the model. Nevertheless, we highlight that data with low-confidence pseudo labels can still be beneficial to the training process. Specifically, although the class with the highest predicted probability is unreliable, we can assume that the sample is very unlikely to belong to the classes with the lowest probabilities. In this way, such data can also be very informative if we effectively exploit these complementary labels, i.e., the classes that a sample does not belong to. Inspired by this, we propose a novel Contrastive Complementary Labeling (CCL) method that constructs a large number of reliable negative pairs based on the complementary labels and adopts contrastive learning to make use of all the unlabeled data. Extensive experiments demonstrate that CCL significantly improves performance on top of existing methods. More critically, CCL is particularly effective under label-scarce settings. For example, it yields an improvement of 2.43% over FixMatch on CIFAR-10 with only 40 labeled samples.
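The core idea of extracting complementary labels from low-confidence predictions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the helper `complementary_labels` and the choice of `k` are hypothetical, and in practice these labels would feed a contrastive loss by treating each sample and any sample assigned to one of its complementary classes as a negative pair.

```python
import numpy as np

def complementary_labels(probs, k=2):
    """For each sample, return the indices of the k classes with the
    lowest predicted probability -- classes the sample is very unlikely
    to belong to (hypothetical helper, for illustration only)."""
    # argsort ascending: the first k entries are the least-likely classes
    return np.argsort(probs, axis=1)[:, :k]

# Toy softmax predictions for 2 unlabeled samples over 4 classes.
# Sample 1 is high-confidence (max 0.70); sample 2 is low-confidence
# (max 0.40), yet its complementary labels are still informative.
probs = np.array([[0.05, 0.10, 0.70, 0.15],
                  [0.40, 0.35, 0.05, 0.20]])
print(complementary_labels(probs, k=2))  # [[0 1]
                                         #  [2 3]]
```

Even when the top-1 prediction is too unreliable for pseudo-labeling, the bottom-k classes give reliable "does not belong to" information that can anchor negative pairs for contrastive learning.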