Noisy labels, which arise from mistakes in manual annotation or from web-crawled data collection for supervised learning, can cause neural networks to overfit the misleading information and degrade generalization performance. Self-supervised learning works in the absence of labels and thus eliminates the negative impact of noisy labels. Motivated by co-training with a supervised learning view and a self-supervised learning view, we propose a simple yet effective method called Co-learning for learning with noisy labels. Co-learning performs supervised learning and self-supervised learning in a cooperative way. It imposes two constraints on a shared feature encoder, the intrinsic similarity from the self-supervised module and the structural similarity from the noisily-supervised module, and regularizes the network to maximize the agreement between the two constraints. We compare Co-learning fairly with peer methods on corrupted versions of benchmark datasets, and extensive results demonstrate that Co-learning is superior to many state-of-the-art approaches.
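To make the architecture concrete, below is a minimal PyTorch sketch of the idea described above: one shared encoder feeds (i) a classifier head trained on the noisy labels and (ii) a projection head trained with a self-supervised similarity objective, plus an agreement term coupling the two views. The specific loss forms, layer sizes, and weighting here are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative sketch of Co-learning: shared encoder, two heads, and a
# combined loss. Loss forms and hyperparameters are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoLearningNet(nn.Module):
    def __init__(self, feat_dim=128, proj_dim=64, num_classes=10):
        super().__init__()
        # Shared feature encoder (a stand-in for a real backbone such as a ResNet).
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim), nn.ReLU()
        )
        # Noisily-supervised view: class logits.
        self.classifier = nn.Linear(feat_dim, num_classes)
        # Self-supervised view: projection for similarity learning.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), F.normalize(self.projector(h), dim=1)

def co_learning_loss(model, x1, x2, noisy_y, alpha=1.0, beta=1.0):
    """x1, x2 are two augmented views of the same image batch."""
    logits1, z1 = model(x1)
    logits2, z2 = model(x2)
    # Supervised constraint: cross-entropy against the (possibly noisy) labels.
    sup = F.cross_entropy(logits1, noisy_y) + F.cross_entropy(logits2, noisy_y)
    # Intrinsic similarity: projections of the two augmented views should agree.
    intrinsic = (2 - 2 * (z1 * z2).sum(dim=1)).mean()
    # Agreement between views of the classifier's predictive distribution,
    # regularizing the shared encoder (one illustrative choice of coupling).
    p1 = F.softmax(logits1, dim=1)
    log_p2 = F.log_softmax(logits2, dim=1)
    agreement = F.kl_div(log_p2, p1, reduction="batchmean")
    return sup + alpha * intrinsic + beta * agreement

# Usage on random CIFAR-sized data:
model = CoLearningNet()
x1, x2 = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
loss = co_learning_loss(model, x1, x2, y)
loss.backward()
```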