Successful training of deep neural networks with noisy labels is an essential capability as most real-world datasets contain some amount of mislabeled data. Left unmitigated, label noise can sharply degrade typical supervised learning approaches. In this paper, we present robust temporal ensembling (RTE), which combines robust loss with semi-supervised regularization methods to achieve noise-robust learning. We demonstrate that RTE achieves state-of-the-art performance across the CIFAR-10, CIFAR-100, ImageNet, WebVision, and Food-101N datasets, while forgoing the recent trend of label filtering and/or fixing. Finally, we show that RTE also retains competitive corruption robustness to unforeseen input noise using CIFAR-10-C, obtaining a mean corruption error (mCE) of 13.50% even in the presence of an 80% noise ratio, versus 26.9% mCE with standard methods on clean data.
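To make the two ingredients named above concrete, the sketch below shows one plausible way a noise-robust classification loss can be combined with a consistency-style semi-supervised regularizer. The generalized cross-entropy form, the consistency weight `lam`, the parameter `q`, and all function names are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def generalized_cross_entropy(logits, targets, q=0.7):
    """Robust loss L_q = (1 - p_y^q) / q; approaches standard CE as q -> 0."""
    probs = F.softmax(logits, dim=1)
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_y.clamp(min=1e-7) ** q) / q).mean()


def consistency_loss(logits_weak, logits_strong):
    """Mean-squared consistency between predictions on two augmented views."""
    return F.mse_loss(F.softmax(logits_weak, dim=1),
                      F.softmax(logits_strong, dim=1))


def combined_loss(model, x_weak, x_strong, targets, lam=1.0, q=0.7):
    """Robust loss on (possibly noisy) labels plus a consistency regularizer."""
    logits_weak = model(x_weak)
    logits_strong = model(x_strong)
    robust = generalized_cross_entropy(logits_weak, targets, q=q)
    consistency = consistency_loss(logits_weak.detach(), logits_strong)
    return robust + lam * consistency
```

The design intuition is that the robust term limits the gradient contribution of confidently mislabeled examples, while the consistency term supplies a label-free training signal that keeps the network from memorizing the noise.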