In the presence of noisy or incorrect labels, neural networks have the undesirable tendency to memorize information about the noise. Standard regularization techniques such as dropout, weight decay, or data augmentation sometimes help, but do not prevent this behavior. If one considers neural network weights as random variables that depend on the data and the stochasticity of training, the amount of memorized information can be quantified with the Shannon mutual information between the weights and the vector of all training labels given the inputs, $I(w ; \mathbf{y} \mid \mathbf{x})$. We show that for any training algorithm, low values of this term correspond to reduced memorization of label noise and to better generalization bounds. To obtain these low values, we propose training algorithms that employ an auxiliary network that predicts gradients in the final layers of a classifier without accessing labels. We illustrate the effectiveness of our approach on versions of MNIST, CIFAR-10, and CIFAR-100 corrupted with various noise models, and on the large-scale Clothing1M dataset, which has noisy labels.
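To make the auxiliary-network idea concrete, the following is a minimal PyTorch-style sketch of one way such a gradient predictor could be wired in: the predictor sees only the classifier's penultimate features (never the labels), and its output is injected as the gradient at the logits when updating the classifier. All names (classifier_body, grad_predictor, train_step) and the choice to fit the predictor by plain regression onto the observed label-dependent gradient are illustrative assumptions, not the exact algorithm proposed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_features, num_classes = 128, 10

# Classifier split into a body and a final (label-facing) layer.
classifier_body = nn.Sequential(nn.Linear(784, num_features), nn.ReLU())
final_layer = nn.Linear(num_features, num_classes)

# Auxiliary network: maps penultimate features (no labels) to a predicted
# gradient of the loss with respect to the logits.
grad_predictor = nn.Sequential(
    nn.Linear(num_features, 64), nn.ReLU(), nn.Linear(64, num_classes)
)

opt_main = torch.optim.SGD(
    list(classifier_body.parameters()) + list(final_layer.parameters()), lr=0.01
)
opt_aux = torch.optim.SGD(grad_predictor.parameters(), lr=0.01)


def train_step(x, y):
    feats = classifier_body(x)
    logits = final_layer(feats)

    # Label-dependent gradient of cross-entropy w.r.t. the logits
    # (softmax minus one-hot), used here only as a regression target.
    with torch.no_grad():
        target_grad = F.softmax(logits, dim=1) - F.one_hot(y, num_classes).float()

    # Fit the label-free predictor to the observed gradient.
    pred_grad = grad_predictor(feats.detach())
    aux_loss = F.mse_loss(pred_grad, target_grad)
    opt_aux.zero_grad()
    aux_loss.backward()
    opt_aux.step()

    # Update the classifier by injecting the *predicted* gradient at the
    # logits, so the weight update is driven by a label-free signal.
    opt_main.zero_grad()
    logits.backward(gradient=pred_grad.detach())
    opt_main.step()
    return aux_loss.item()
```

A call such as `train_step(torch.randn(32, 784), torch.randint(0, 10, (32,)))` runs one such update; in this sketch the labels enter only through the regression target for the auxiliary network, while the classifier itself is updated from the label-free prediction.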