There is a family of label modification approaches, including self and non-self label correction (LC) and output regularisation. They are widely used for training robust deep neural networks (DNNs), but have not yet been analysed together in a thorough and mathematically grounded way. We study them and identify three key issues: (1) Self LC is the most appealing, as it leverages a model's own knowledge and requires no auxiliary model; however, it is unclear how to adaptively trust a learner as training proceeds. (2) Some methods penalise low-entropy (i.e., high-confidence) predictions while others reward them, prompting us to ask which is better. (3) Under the standard training setting, a learned model becomes less confident when severe label noise exists, so Self LC built on such high-entropy self knowledge would generate high-entropy, uninformative targets. To resolve issue (1), inspired by the well-accepted finding that deep neural networks learn meaningful patterns before fitting noise, we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and prediction entropy. Concretely, for any data point, we progressively and adaptively trust its predicted probability distribution over its annotated one once the network has been trained for a relatively long time and the prediction has low entropy. Regarding issue (2), the effectiveness of ProSelfLC supports entropy minimisation; with ProSelfLC, we empirically show that it is more effective to redefine a semantically meaningful low-entropy state and optimise the learner toward it. To address issue (3), we sharpen self knowledge with a low temperature before exploiting it to correct labels, so that the revised labels define low-entropy target probability distributions. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings, and on both image and protein datasets.
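To make the mechanism concrete, the sketch below illustrates one plausible realisation of the correction rule described above in PyTorch: each corrected target is a convex combination of the annotated label and the temperature-sharpened self prediction, weighted by a trust factor that grows with training time (global) and with prediction confidence (local). The logistic time schedule, the normalised-entropy local trust, and all parameter names (`temperature`, `steepness`, `total_iters`) are illustrative assumptions, not a verbatim reproduction of the paper's implementation.

```python
# Minimal illustrative sketch of progressive self label correction.
# Functional forms and hyperparameters here are assumptions for clarity.
import torch
import torch.nn.functional as F


def proselflc_targets(logits, onehot_targets, cur_iter, total_iters,
                      temperature=0.5, steepness=10.0):
    """Return corrected soft targets mixing annotations with self knowledge."""
    num_classes = logits.size(1)

    # Issue (3): sharpen self knowledge with a low temperature so the
    # predicted distribution used for correction has low entropy.
    probs = F.softmax(logits.detach() / temperature, dim=1)

    # Global trust: grows with training time, reflecting that DNNs learn
    # meaningful patterns before fitting noise (a logistic schedule is assumed).
    global_trust = torch.sigmoid(
        torch.tensor(steepness * (cur_iter / total_iters - 0.5)))

    # Local trust per example: 1 - H(p)/H(u), so low-entropy (confident)
    # predictions are trusted more than uncertain ones.
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(num_classes)))
    local_trust = 1.0 - entropy / max_entropy

    # Issue (1): progressively and adaptively trust the prediction versus
    # the annotation; epsilon is the per-example correction weight.
    epsilon = (global_trust * local_trust).unsqueeze(1)
    return (1.0 - epsilon) * onehot_targets + epsilon * probs


def proselflc_loss(logits, onehot_targets, cur_iter, total_iters):
    """Cross entropy against the corrected (low-entropy) soft targets."""
    soft_targets = proselflc_targets(logits, onehot_targets,
                                     cur_iter, total_iters)
    log_probs = F.log_softmax(logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()
```

In a training loop, `proselflc_loss(logits, F.one_hot(labels, num_classes).float(), cur_iter, total_iters)` would replace the standard cross-entropy loss; detaching the predictions before mixing keeps the corrected targets fixed with respect to the current gradient step, so early in training the loss reduces to ordinary cross entropy and the self knowledge only takes over once the trust weight grows.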