Even though deep neural networks succeed on many different tasks, including semantic segmentation, they lack robustness against adversarial examples. Adversarial training is often used to counteract this vulnerability. However, it is known that adversarial training with weak adversarial attacks (e.g., using the Fast Gradient Method) does not improve robustness against stronger attacks. Recent research shows that the robustness of such single-step methods can be increased by choosing an appropriate step size during training. Finding such a step size without increasing the computational effort of single-step adversarial training is still an open challenge. In this work, we address the computationally particularly demanding task of semantic segmentation and propose a new step size control algorithm that increases the robustness of single-step adversarial training. The proposed algorithm does not considerably increase the computational effort of single-step adversarial training and also simplifies training, because it is free of meta-parameters. We show that the robustness of our approach can compete with multi-step adversarial training on two popular benchmarks for semantic segmentation.
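To make the setting concrete, the following is a minimal sketch of single-step (Fast Gradient Method style) adversarial training for semantic segmentation in PyTorch. The abstract does not specify the proposed step size control rule, so `choose_step_size` below is a hypothetical placeholder standing in for it; `model`, `loader`, and `optimizer` are likewise assumed.

```python
# Minimal sketch of single-step adversarial training, assuming a
# standard PyTorch segmentation setup. The function choose_step_size
# is a placeholder: the paper's contribution is a meta-parameter-free
# rule in its place, which is NOT reproduced here.
import torch
import torch.nn.functional as F


def choose_step_size(epsilon: float) -> float:
    # Hypothetical stand-in: simply use the full perturbation budget,
    # as plain FGSM-style training would.
    return epsilon


def single_step_adv_train(model, loader, optimizer, epsilon=8 / 255):
    model.train()
    for images, labels in loader:  # labels: per-pixel class indices
        # Compute the input gradient of the segmentation loss.
        images.requires_grad_(True)
        loss = F.cross_entropy(model(images), labels, ignore_index=255)
        grad, = torch.autograd.grad(loss, images)

        # One gradient-sign step, scaled by the chosen step size.
        alpha = choose_step_size(epsilon)
        adv = (images.detach() + alpha * grad.sign()).clamp(0.0, 1.0)

        # Update the model on the adversarial batch (single-step AT:
        # only one attack iteration per parameter update).
        optimizer.zero_grad()
        adv_loss = F.cross_entropy(model(adv), labels, ignore_index=255)
        adv_loss.backward()
        optimizer.step()
```

The key cost property referenced in the abstract is visible here: each parameter update needs only one extra forward/backward pass to craft the attack, unlike multi-step adversarial training, which repeats the attack step several times per batch.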