We propose self-adaptive training -- a unified training algorithm that dynamically calibrates and enhances the training process using model predictions, without incurring extra computational cost -- to advance both the supervised and self-supervised learning of deep neural networks. We analyze the training dynamics of deep networks on training data corrupted by, e.g., random noise and adversarial examples. Our analysis shows that model predictions can magnify useful underlying information in the data, and that this phenomenon occurs broadly even in the absence of \emph{any} label information, highlighting that model predictions can substantially benefit the training process: self-adaptive training improves the generalization of deep networks under noise and enhances self-supervised representation learning. The analysis also sheds light on the understanding of deep learning, e.g., offering a potential explanation of the recently discovered double-descent phenomenon in empirical risk minimization and of the collapsing issue in state-of-the-art self-supervised learning algorithms. Experiments on the CIFAR, STL, and ImageNet datasets verify the effectiveness of our approach in three applications: classification with label noise, selective classification, and linear evaluation. To facilitate future research, the code has been made publicly available at https://github.com/LayneH/self-adaptive-training.
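To make the idea concrete, below is a minimal sketch of the supervised variant, assuming the calibration takes the form of an exponential moving average of the model's own predictions into per-example soft training targets after a warm-up period; the class name, momentum alpha, and warm-up setting are illustrative assumptions, not the paper's official implementation or exact hyperparameters.

    # Minimal sketch of self-adaptive training (supervised variant), assuming
    # an EMA-style update of per-example soft targets by model predictions;
    # names and hyperparameters are illustrative, not the official code.
    import torch
    import torch.nn.functional as F

    class SelfAdaptiveTargets:
        def __init__(self, labels, num_classes, alpha=0.9, warmup_epochs=60):
            # Soft targets start as the (possibly noisy) one-hot labels.
            self.targets = F.one_hot(labels, num_classes).float()
            self.alpha = alpha
            self.warmup_epochs = warmup_epochs

        def loss(self, logits, indices, epoch):
            probs = F.softmax(logits, dim=1)
            if epoch >= self.warmup_epochs:
                # Calibrate targets with an exponential moving average of the
                # model's predictions; reuses the existing forward pass, so no
                # extra computational cost is incurred.
                with torch.no_grad():
                    self.targets[indices] = (
                        self.alpha * self.targets[indices]
                        + (1 - self.alpha) * probs.detach()
                    )
            # Cross-entropy against the running soft targets.
            return -(self.targets[indices] * torch.log(probs + 1e-8)).sum(dim=1).mean()

In a training loop, each batch would call loss(logits, batch_indices, epoch), where batch_indices are the dataset indices of the batch examples, so that the calibrated targets persist and accumulate across epochs.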