Adversarial training (AT), formulated as a minimax optimization problem, can effectively enhance a model's robustness against adversarial attacks. Existing AT methods mainly focus on manipulating the inner maximization to generate high-quality adversarial variants, or on manipulating the outer minimization to design effective learning objectives. However, empirical results of AT consistently exhibit robustness at odds with accuracy, as well as the cross-over mixture problem, which motivates us to study whether label randomness can benefit AT. First, we thoroughly investigate noisy-label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT. Second, based on these observations, we propose a simple but effective method -- NoiLIn -- that randomly injects NLs into the training data at each training epoch and dynamically increases the NL injection rate once robust overfitting occurs. Empirically, NoiLIn can significantly mitigate AT's undesirable issue of robust overfitting and even further improve the generalization of state-of-the-art AT methods. Philosophically, NoiLIn sheds light on a new perspective on learning with NLs: NLs should not always be deemed detrimental, and even in the absence of NLs in the training set, we may consider injecting them deliberately. Code is available at https://github.com/zjfheart/NoiLIn.
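The core mechanism described above -- symmetric label-noise injection each epoch, with the rate increased once robust overfitting is detected -- can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function names (`inject_label_noise`, `update_rate`), the step size, the cap on the rate, and the use of a drop in held-out robust accuracy as the overfitting signal are all our assumptions.

```python
import numpy as np

def inject_label_noise(labels, num_classes, rate, rng):
    """Flip a fraction `rate` of labels to a uniformly chosen *different*
    class (symmetric label noise). Returns a fresh noisy copy each call,
    so a new random noise pattern can be drawn every epoch."""
    labels = labels.copy()
    n_flip = int(round(rate * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    for i in idx:
        wrong = [c for c in range(num_classes) if c != labels[i]]
        labels[i] = rng.choice(wrong)
    return labels

def update_rate(rate, best_robust_acc, cur_robust_acc,
                step=0.05, max_rate=0.4):
    """Toy schedule (assumed): raise the injection rate when held-out
    robust accuracy falls below its best value so far, a simplified
    stand-in for detecting robust overfitting."""
    if cur_robust_acc < best_robust_acc:
        rate = min(rate + step, max_rate)
    return rate
```

In an AT loop, one would draw `noisy = inject_label_noise(clean_labels, K, rate, rng)` at the start of each epoch, run the usual inner attack / outer update on `noisy`, then call `update_rate` after evaluating robust accuracy on a validation set.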