Recently demonstrated physical-world adversarial attacks have exposed vulnerabilities in perception systems that pose severe risks for safety-critical applications such as autonomous driving. These attacks place adversarial artifacts in the physical world that indirectly add a universal patch to a model's inputs, fooling it in a variety of contexts. Adversarial training is the most effective defense against image-dependent adversarial attacks. However, tailoring adversarial training to universal patches is computationally expensive, since the optimal universal patch depends on the model weights, which change during training. We propose meta adversarial training (MAT), a novel combination of adversarial training with meta-learning, which overcomes this challenge by meta-learning universal patches alongside model training. MAT requires little extra computation while continuously adapting a large set of patches to the current model. MAT considerably increases robustness against universal patch attacks on image classification and traffic-light detection.
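To make the idea concrete, the training loop can be sketched as below. This is a minimal illustration under assumptions, not the authors' implementation: the names (`apply_patch`, `mat_step`, `patch_lr`, `inner_steps`), the fixed patch location, the signed-gradient inner update, and the population size are all illustrative choices for the image-classification case.

```python
# Minimal sketch of meta adversarial training (MAT): a population of universal
# patches is persisted across training steps and adapted to the current model
# weights with a few cheap inner steps, instead of being re-optimized from scratch.
import torch
import torch.nn.functional as F

def apply_patch(x, patch, y0=0, x0=0):
    # Paste a (C, ph, pw) patch onto every image in the batch at a fixed location.
    x = x.clone()
    _, ph, pw = patch.shape
    x[:, :, y0:y0 + ph, x0:x0 + pw] = patch
    return x

def mat_step(model, opt, x, y, patches, patch_lr=0.05, inner_steps=2):
    # Sample one patch from the meta-learned population.
    i = torch.randint(len(patches), (1,)).item()
    patch = patches[i].clone().requires_grad_(True)
    # Inner loop: a few signed-gradient ascent steps adapt the patch
    # to the *current* model weights (the meta-learning component).
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(apply_patch(x, patch)), y)
        (grad,) = torch.autograd.grad(loss, patch)
        patch = (patch + patch_lr * grad.sign()).clamp(0, 1).detach().requires_grad_(True)
    patches[i] = patch.detach()  # persist the adapted patch across training steps
    # Outer loop: ordinary adversarial-training step on the patched inputs.
    opt.zero_grad()
    loss = F.cross_entropy(model(apply_patch(x, patch.detach())), y)
    loss.backward()
    opt.step()
    return loss.item()

# Population of candidate universal patches, e.g. 16 random 8x8 RGB patches.
patches = [torch.rand(3, 8, 8) for _ in range(16)]
```

Persisting the adapted patches across steps is what keeps the overhead low: rather than re-solving for the optimal universal patch at every weight update, each patch in the population only needs a few inner steps to keep tracking the slowly changing model.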