Deep neural networks are prone to overfitting to noisy labels, resulting in poor generalization performance. To overcome this problem, we present a simple and effective method, self-ensemble label correction (SELC), to progressively correct noisy labels and refine the model. We look deeper into the memorization behavior of training with noisy labels and observe that the network outputs are reliable in the early stage. To retain this reliable knowledge, SELC uses ensemble predictions formed by an exponential moving average of network outputs to update the original noisy labels. We show that training with SELC refines the model by gradually reducing supervision from the noisy labels and increasing supervision from the ensemble predictions. Despite its simplicity, SELC obtains more promising and stable results than many state-of-the-art methods in the presence of class-conditional, instance-dependent, and real-world label noise. The code is available at https://github.com/MacLLL/SELC.
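To make the update rule concrete, here is a minimal PyTorch-style sketch of EMA-based label correction in the spirit of the description above; the function name selc_style_step and the hyperparameters alpha and warmup_epochs are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
import torch
import torch.nn.functional as F

def selc_style_step(model, optimizer, x, idx, soft_targets, epoch,
                    alpha=0.9, warmup_epochs=30):
    """One training step of EMA-based label correction (illustrative sketch).

    soft_targets: (num_train, num_classes) tensor, initialized once as the
    one-hot noisy labels, e.g. F.one_hot(noisy_labels, num_classes).float(),
    and kept on the same device as the model. alpha and warmup_epochs are
    assumed hyperparameters for illustration.
    """
    logits = model(x)

    if epoch >= warmup_epochs:  # start correcting only after an assumed warm-up
        with torch.no_grad():
            probs = F.softmax(logits, dim=1)
            # Ensemble prediction: exponential moving average of network outputs,
            # which gradually replaces the original (possibly noisy) one-hot labels.
            soft_targets[idx] = alpha * soft_targets[idx] + (1.0 - alpha) * probs

    # Cross-entropy against the (soft) targets: supervision shifts from the
    # noisy labels toward the ensemble predictions as training progresses.
    loss = -(soft_targets[idx] * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because each update is a convex combination of two probability distributions, the targets remain valid distributions throughout training, which is what allows supervision to shift smoothly from the original one-hot noisy labels toward the ensemble predictions.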