In this paper, we introduced the novel concept of an advisor network to address the problem of noisy labels in image classification. Deep neural networks (DNNs) are prone to performance degradation and overfitting when trained on data with noisy annotations. Loss-weighting methods aim to mitigate the influence of noisy labels during training, in some cases completely removing their contribution. This discarding process prevents DNNs from learning wrong associations between images and labels, but it reduces the amount of usable data, especially when most samples are mislabeled. In contrast, our method weights the features extracted by the classifier directly, without altering the loss value of each sample. The advisor helps the classifier focus only on part of the information present in mislabeled examples, allowing it to leverage that data as well. We trained the advisor with a meta-learning strategy so that it can adapt throughout the training of the main model. We tested our method on CIFAR10 and CIFAR100 with synthetic noise, and on Clothing1M, which contains real-world noise, reporting state-of-the-art results.
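To make the summarized idea concrete, the following is a minimal sketch, not the authors' released code, of how an advisor module could re-weight the classifier's features instead of down-weighting the loss; the names `Advisor`, `feat_dim`, `backbone`, and `head`, as well as the toy network sizes, are illustrative assumptions, and the meta-learning update of the advisor is only indicated in a comment.

```python
import torch
import torch.nn as nn

class Advisor(nn.Module):
    """Hypothetical advisor: maps classifier features to per-dimension weights in (0, 1)."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Soft gating: part of the information in a noisy sample is kept,
        # rather than discarding the whole sample via a zero loss weight.
        return features * self.gate(features)

# Usage sketch: backbone -> advisor gating -> classification head (toy sizes).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
advisor = Advisor(feat_dim=128)
head = nn.Linear(128, 10)

x = torch.randn(8, 3, 32, 32)                 # a CIFAR10-sized toy batch
logits = head(advisor(backbone(x)))           # advisor re-weights features, not the loss
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 10, (8,)))
loss.backward()
# In the paper, the advisor itself is updated with a meta-learning step
# (adapting alongside the main model during training); that outer loop is omitted here.
```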