To train robust deep neural networks (DNNs), we systematically study several target modification approaches, which include output regularisation, and self and non-self label correction (LC). We discover two key issues: (1) Self LC is the most appealing, as it exploits the learner's own knowledge and requires no extra models. However, how to automatically decide the trust degree of a learner as training progresses is not well answered in the literature. (2) Some methods penalise low-entropy predictions while others reward them, prompting us to ask which is better. To resolve the first issue, building on two well-accepted propositions, namely that deep neural networks learn meaningful patterns before fitting noise [3] and the minimum entropy regularisation principle [10], we propose a novel end-to-end method named ProSelfLC, which is designed according to learning time and entropy. Specifically, given a data point, we progressively increase trust in its predicted label distribution over its annotated one if the model has been trained for enough time and the prediction is of low entropy (i.e., high confidence). For the second issue, using ProSelfLC, we empirically show that it is better to redefine a meaningful low-entropy state and optimise the learner toward it. This serves as a defence of entropy minimisation. We demonstrate the effectiveness of ProSelfLC through extensive experiments in both clean and noisy settings. The source code is available at https://github.com/XinshaoAmosWang/ProSelfLC-CVPR2021.

Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation
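To make the target-modification rule described above concrete, the following is a minimal sketch, not the authors' reference implementation. It assumes the trust in the model's own prediction is the product of a global time-dependent factor and a per-sample confidence factor derived from normalised entropy, as the abstract suggests; the function name `proselflc_target`, the sigmoid ramp for the time factor, and the sharpness parameter `b` are illustrative assumptions.

```python
import numpy as np

def proselflc_target(y_onehot, p_pred, t, total_t, b=16.0):
    """Sketch of a time- and entropy-aware target modification.

    y_onehot: annotated label distribution (one-hot), shape (C,)
    p_pred:   model's predicted distribution, shape (C,)
    t:        current training iteration
    total_t:  total training iterations
    b:        sharpness of the time ramp (illustrative choice)
    """
    num_classes = len(p_pred)
    # Global trust: grows with training time, since DNNs learn
    # meaningful patterns before fitting noise (sigmoid ramp
    # centred at the midpoint of training -- an assumed form).
    g_time = 1.0 / (1.0 + np.exp(-b * (t / total_t - 0.5)))
    # Local trust: one minus normalised entropy, so confident
    # (low-entropy) predictions earn higher trust.
    entropy = -np.sum(p_pred * np.log(p_pred + 1e-12))
    l_conf = 1.0 - entropy / np.log(num_classes)
    # Combined trust in the learner's own prediction.
    eps = g_time * l_conf
    # Modified target: convex combination of the annotated label
    # and the self-predicted distribution.
    return (1.0 - eps) * y_onehot + eps * p_pred
```

Under this sketch, the training loss would then be the usual cross entropy computed against the returned target instead of the raw annotation, so that early in training (or for uncertain predictions) the target stays close to the annotated label, while late in training a confident prediction can progressively correct a potentially noisy label.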