Recently demonstrated physical-world adversarial attacks have exposed vulnerabilities in perception systems that pose severe risks for safety-critical applications such as autonomous driving. These attacks place adversarial artifacts in the physical world that indirectly add universal perturbations to a model's inputs, fooling it in a variety of contexts. Adversarial training is the most effective defense against image-dependent adversarial attacks. However, tailoring adversarial training to universal perturbations is computationally expensive, since the optimal universal perturbations depend on the model weights, which change during training. We propose meta adversarial training (MAT), a novel combination of adversarial training with meta-learning that overcomes this challenge by meta-learning universal perturbations alongside model training. MAT adds little computational overhead while continuously adapting a large set of perturbations to the current model. We present results for universal patch and universal perturbation attacks on image classification and traffic-light detection. MAT considerably increases robustness against universal patch attacks compared to prior work.
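To make the training scheme concrete, here is a minimal sketch of one MAT-style step, assuming a PyTorch image classifier, additive L-infinity-bounded universal perturbations, and a REPTILE-style meta-update of a pool of stored perturbations. The function names (`make_pool`, `mat_step`), hyperparameter values, and inner-loop details are illustrative assumptions, not the authors' reference implementation.

```python
# Sketch of a MAT-style training step (illustrative, not the authors' code).
import torch
import torch.nn.functional as F


def make_pool(n_perturbations, shape, eps):
    """Pool of universal perturbations, randomly initialized in the L-inf ball."""
    return [torch.empty(shape).uniform_(-eps, eps) for _ in range(n_perturbations)]


def mat_step(model, optimizer, x, y, pool,
             eps=8 / 255, inner_lr=2 / 255, inner_steps=1, meta_lr=0.5):
    """One MAT-style training step on a labeled batch (x, y)."""
    # Sample one stored universal perturbation from the pool.
    idx = torch.randint(len(pool), (1,)).item()
    delta = pool[idx].clone().requires_grad_(True)

    # Inner loop: a few FGSM-style ascent steps adapt the sampled
    # perturbation to the current model weights.
    for _ in range(inner_steps):
        loss = F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = torch.clamp(delta.detach() + inner_lr * grad.sign(), -eps, eps)
        delta.requires_grad_(True)

    delta = delta.detach()

    # REPTILE-style meta-update: nudge the stored initialization toward the
    # adapted perturbation so the pool keeps tracking the changing model.
    pool[idx] += meta_lr * (delta - pool[idx])

    # Standard adversarial training step on the perturbed batch.
    optimizer.zero_grad()
    F.cross_entropy(model(torch.clamp(x + delta, 0.0, 1.0)), y).backward()
    optimizer.step()
```

A typical loop would build `pool = make_pool(100, (1, 3, 32, 32), eps=8 / 255)` once and call `mat_step` on every batch. Because only the sampled perturbation is adapted per step while the rest of the pool is carried forward, the per-step overhead stays small, which is the mechanism behind the abstract's claim that MAT requires little extra computation.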