Adversarial training is an effective method for improving model robustness to malicious adversarial attacks. However, this gain in robustness often comes at a significant cost to standard performance on clean images. In many real-world applications, such as health diagnosis and autonomous surgical robotics, standard performance is valued more highly than robustness against such extremely malicious attacks. This raises the question: to what extent can we boost model robustness without sacrificing standard performance? This work tackles this problem and proposes a simple yet effective transfer learning-based adversarial training strategy that disentangles the negative effects of adversarial samples from the model's standard performance. In addition, we introduce a training-friendly adversarial attack algorithm that boosts adversarial robustness without introducing significant training complexity. Extensive experiments show that the proposed method outperforms previous adversarial training algorithms on the stated goal: improving model robustness while preserving the model's standard performance on clean data.
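To make the baseline concrete, the following is a minimal sketch of standard adversarial training with the FGSM attack (the classic formulation, not the paper's proposed method) on a toy logistic-regression model. All data, hyperparameters, and the choice of FGSM are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of FGSM-based adversarial training on a toy
# logistic-regression model. Data, epsilon, and learning rate are
# illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)

# Synthetic binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.1  # learning rate and FGSM perturbation budget (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # 1) Craft adversarial examples: perturb each input along the sign
    #    of the loss gradient w.r.t. the input (FGSM).
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]  # dL/dx for logistic loss
    X_adv = X + eps * np.sign(grad_x)

    # 2) Update the model on the adversarial examples instead of the
    #    clean ones -- this is what can erode clean-data accuracy.
    p_adv = sigmoid(X_adv @ w + b)
    err = p_adv - y
    w -= lr * (X_adv.T @ err) / len(y)
    b -= lr * err.mean()

# Standard (clean-data) accuracy of the adversarially trained model.
acc = ((sigmoid(X @ w + b) > 0.5) == y).mean()
```

On harder, higher-dimensional tasks, step 2 is where the robustness/accuracy trade-off the abstract describes appears: training only on perturbed inputs shifts the decision boundary away from the clean-data optimum.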