We address the problem of making deep learning algorithms robust to label noise. Building on existing label-correction and co-teaching methods, we propose CrossSplit, a novel training procedure that mitigates the memorization of noisy labels by using a pair of neural networks trained on two disjoint parts of the dataset. CrossSplit combines two main ingredients: (i) cross-split label correction: since the model trained on one part of the data cannot memorize example-label pairs from the other part, the training labels presented to each network can be smoothly adjusted using the predictions of its peer network; and (ii) cross-split semi-supervised training: a network trained on one part of the data also uses the unlabeled inputs of the other part. Extensive experiments on the CIFAR-10, CIFAR-100, Tiny-ImageNet and mini-WebVision datasets demonstrate that our method outperforms the current state of the art at noise ratios of up to 90%.
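To make the cross-split label correction ingredient concrete, the sketch below shows how soft targets could be formed in a standard PyTorch classification setup. This is a minimal illustration of the idea stated in the abstract, not the paper's exact formulation: the blending weight `alpha`, the function names, and the use of a fixed convex combination are all assumptions for exposition.

```python
# Minimal sketch of cross-split label correction (illustrative, hypothetical).
# Assumes a standard PyTorch classification setup; `alpha` and all helper
# names are assumptions, not the paper's exact method.
import torch
import torch.nn.functional as F

def soft_correct_labels(peer_logits, labels, num_classes, alpha=0.5):
    """Blend one-hot training labels with the peer network's predictions.

    The peer network was trained on the *other* data split, so it cannot
    have memorized these example-label pairs; its predictions therefore
    serve as a noise-aware reference for smoothing the given labels.
    """
    one_hot = F.one_hot(labels, num_classes).float()
    peer_probs = F.softmax(peer_logits, dim=1)
    return alpha * one_hot + (1.0 - alpha) * peer_probs

def training_step(net, peer, x, y, num_classes, optimizer):
    """One update of `net` on its own split, with labels corrected by `peer`."""
    with torch.no_grad():  # the peer is not updated in this step
        peer_logits = peer(x)
    targets = soft_correct_labels(peer_logits, y, num_classes)
    logits = net(x)
    # Cross-entropy against the smoothed (soft) targets.
    loss = -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full training loop, two such networks would each run this step on their own split while serving as the peer for the other; the semi-supervised ingredient would additionally feed each network the other split's inputs without labels.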