Adversarial training (AT), based on minimax optimization, is a popular learning paradigm for enhancing a model's adversarial robustness. Noisy labels (NL) are commonly believed to undermine learning and hurt a model's performance. Interestingly, these two research directions have rarely crossed paths. In this paper, we raise an intriguing question -- does NL always hurt AT? Firstly, we find that injecting NL into the inner maximization that generates adversarial data implicitly augments the natural data, which benefits AT's generalization. Secondly, we find that injecting NL into the outer minimization that learns the model serves as a regularizer that alleviates robust overfitting, which benefits AT's robustness. To enhance AT's adversarial robustness, we propose "NoiLIn", which gradually increases \underline{Noi}sy \underline{L}abels \underline{In}jection over the course of AT's training. Empirically, NoiLIn answers the question above in the negative -- adversarial robustness can indeed be enhanced by NL injection. Philosophically, we offer a new perspective on learning with NL: NL should not always be deemed detrimental, and even when the training set is free of NL, we may consider injecting it deliberately.
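To make the procedure concrete, below is a minimal sketch of how this kind of noisy-label injection could be instantiated for image classification in PyTorch. It is an illustration under stated assumptions, not the paper's exact recipe: the symmetric label-flipping routine, the linear noise-rate schedule (`start_rate`, `end_rate`), and the PGD hyperparameters (`eps`, `alpha`, `steps`) are all choices made here for the sketch.

```python
# Illustrative sketch of NoiLIn-style adversarial training (assumptions noted above).
import torch
import torch.nn.functional as F

def inject_symmetric_label_noise(labels, noise_rate, num_classes):
    """Randomly reassign a `noise_rate` fraction of labels to a different class."""
    flip = torch.rand(labels.shape, device=labels.device) < noise_rate
    # Offsets in [1, num_classes - 1] guarantee each flipped label actually changes.
    offsets = torch.randint(1, num_classes, labels.shape, device=labels.device)
    noisy = labels.clone()
    noisy[flip] = (labels[flip] + offsets[flip]) % num_classes
    return noisy

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-inf PGD: the inner maximization that generates adversarial data."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)
    return x_adv.detach()

def train_noilin(model, loader, optimizer, num_classes,
                 epochs=100, start_rate=0.0, end_rate=0.3):
    for epoch in range(epochs):
        # Gradually increase the injected noise rate over training
        # (a linear ramp here; the paper's actual schedule may differ).
        rate = start_rate + (end_rate - start_rate) * epoch / max(epochs - 1, 1)
        for x, y in loader:
            y_noisy = inject_symmetric_label_noise(y, rate, num_classes)
            x_adv = pgd_attack(model, x, y_noisy)              # inner maximization on noisy labels
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y_noisy).backward()  # outer minimization on noisy labels
            optimizer.step()
```

Note the design choice in this sketch: the same noisy labels feed both the inner maximization and the outer minimization, mirroring the abstract's two findings (implicit data augmentation from NL in the inner problem, regularization against robust overfitting from NL in the outer problem).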