As deep neural networks can easily overfit to noisy labels, robust training in the presence of noisy labels has become an important challenge in modern deep learning. While existing methods address this problem from various directions, they still produce unpredictable, sub-optimal results because they rely on posterior information estimated by a feature extractor that is itself corrupted by the noisy labels. Lipschitz regularization successfully alleviates this problem by training a robust feature extractor, but it requires longer training time and expensive computations. Motivated by this, we propose a simple yet effective method, called ALASCA, which efficiently provides a robust feature extractor under label noise. ALASCA integrates two key ingredients: (1) adaptive label smoothing, based on our theoretical analysis that label smoothing implicitly induces Lipschitz regularization, and (2) auxiliary classifiers that enable practical application of intermediate Lipschitz regularization with negligible computational overhead. We conduct wide-ranging experiments on ALASCA and combine our proposed method with previous noise-robust methods on several synthetic and real-world datasets. Experimental results show that our framework consistently and efficiently improves the robustness of feature extractors and the performance of existing baselines. Our code is available at https://github.com/jongwooko/ALASCA.
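To make the two ingredients concrete, below is a minimal, illustrative PyTorch sketch of (1) training against label-smoothed targets and (2) attaching an auxiliary classifier to an intermediate feature map so that the smoothing-based regularization also acts on intermediate representations. The backbone architecture, the fixed smoothing factor `eps`, and the loss weighting `aux_weight` are assumptions made for illustration only; they are not taken from the ALASCA implementation (see the repository above for the authors' code, including the adaptive choice of the smoothing factor).

```python
# Illustrative sketch only -- not the ALASCA implementation.
# Shows label smoothing plus an auxiliary classifier on intermediate features.
import torch
import torch.nn as nn
import torch.nn.functional as F


def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against label-smoothed targets (fixed eps for simplicity)."""
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    one_hot = F.one_hot(targets, num_classes).float()
    soft_targets = (1.0 - eps) * one_hot + eps / num_classes
    return -(soft_targets * log_probs).sum(dim=-1).mean()


class BackboneWithAuxHead(nn.Module):
    """Toy MLP backbone with an auxiliary classifier on an intermediate feature."""

    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.block1 = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.block2 = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, num_classes)       # main classifier
        self.aux_head = nn.Linear(hidden, num_classes)   # auxiliary classifier

    def forward(self, x):
        h1 = self.block1(x)   # intermediate features fed to the auxiliary head
        h2 = self.block2(h1)
        return self.head(h2), self.aux_head(h1)


def training_step(model, x, y, eps=0.1, aux_weight=0.5):
    main_logits, aux_logits = model(x)
    # Smoothed losses on both heads: the auxiliary term regularizes the
    # intermediate features without an explicit Lipschitz penalty.
    loss = smoothed_cross_entropy(main_logits, y, eps)
    loss = loss + aux_weight * smoothed_cross_entropy(aux_logits, y, eps)
    return loss


if __name__ == "__main__":
    model = BackboneWithAuxHead()
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    print(training_step(model, x, y).item())
```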