Inspired by the remarkable zero-shot generalization capacity of vision-language pre-trained models, we seek to leverage supervision from the CLIP model to alleviate the burden of data labeling. However, such supervision inevitably contains label noise, which significantly degrades the discriminative power of the classification model. In this work, we propose Transductive CLIP, a novel framework for learning a classification network with noisy labels from scratch. First, a class-conditional contrastive learning mechanism is proposed to mitigate the reliance on pseudo labels and boost tolerance to noisy labels. Second, an ensemble-label strategy is adopted for pseudo label updating to stabilize the training of deep neural networks with noisy labels. By combining both techniques, this framework can effectively reduce the impact of noisy labels from the CLIP model. Experiments on multiple benchmark datasets demonstrate substantial improvements over other state-of-the-art methods.
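The ensemble-label updating strategy mentioned above can be sketched as an exponential moving average over soft pseudo labels, so that noisy per-epoch predictions are smoothed by the historical ensemble. This is a minimal illustrative sketch, not the paper's exact formulation; the function name and momentum value are assumptions.

```python
import numpy as np

def update_ensemble_labels(ensemble, predictions, momentum=0.9):
    """Exponential moving average of per-sample class probabilities.

    ensemble:    (N, C) current soft pseudo labels
    predictions: (N, C) softmax outputs from the current epoch
    momentum:    weight on the historical ensemble (illustrative value)
    """
    updated = momentum * ensemble + (1.0 - momentum) * predictions
    # renormalize so each row remains a probability distribution
    return updated / updated.sum(axis=1, keepdims=True)

# toy example: two samples, three classes
ensemble = np.array([[0.8, 0.1, 0.1],
                     [0.2, 0.6, 0.2]])
preds = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.8, 0.1]])
new_labels = update_ensemble_labels(ensemble, preds)
```

Because each sample's pseudo label accumulates evidence over many epochs, a single noisy CLIP prediction shifts the label only slightly, which is what stabilizes training.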