Transfer-based adversarial examples are one of the most important classes of black-box attacks. However, there is a trade-off between the transferability and the imperceptibility of the adversarial perturbation. Prior work in this direction often requires a fixed but large $\ell_p$-norm perturbation budget to reach a good transfer success rate, leading to perceptible adversarial perturbations. On the other hand, most current unrestricted adversarial attacks that aim to generate semantic-preserving perturbations suffer from weak transferability to the target model. In this work, we propose a geometry-aware framework to generate transferable adversarial examples with minimal changes. Analogous to model selection in statistical machine learning, we leverage a validation model to select the optimal perturbation budget for each image under both the $\ell_{\infty}$-norm and unrestricted threat models. Extensive experiments verify the effectiveness of our framework in balancing the imperceptibility and transferability of the crafted adversarial examples. The methodology is the foundation of our entry to the CVPR'21 Security AI Challenger: Unrestricted Adversarial Attacks on ImageNet, in which we ranked 1st out of 1,559 teams and surpassed the runner-up submissions by 4.59% and 23.91% in terms of final score and average image quality level, respectively. Code is available at https://github.com/Equationliu/GA-Attack.
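The per-image budget selection described in the abstract can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: the linear "surrogate" and "validation" classifiers, the one-step sign attack, and the $\epsilon$ ladder are all illustrative assumptions standing in for real networks and a real transfer attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two models: linear classifiers over 64-dim inputs.
# In the paper these would be a white-box surrogate network and a held-out
# validation network; here the validation model is a noisy copy (assumption).
W_surrogate = rng.normal(size=(10, 64))
W_validation = W_surrogate + 0.1 * rng.normal(size=(10, 64))

def predict(W, x):
    """Class prediction of a linear classifier."""
    return int(np.argmax(W @ x))

def craft(x, label, eps):
    """One-step sign attack on the surrogate (stand-in for the transfer
    attack): push the input against the gradient of the true-class logit,
    which for a linear model is simply the corresponding weight row."""
    return x - eps * np.sign(W_surrogate[label])

def select_budget(x, label, eps_ladder):
    """Analogue of the paper's validation-based model selection: walk an
    increasing ladder of perturbation budgets and return the smallest eps
    whose surrogate-crafted example also fools the validation model."""
    for eps in sorted(eps_ladder):
        x_adv = craft(x, label, eps)
        if predict(W_validation, x_adv) != label:
            return eps, x_adv
    return None, x  # no budget in the ladder transfers
```

The point of the ladder is that transferability is checked on a model *not* used to craft the perturbation, so the selected budget is the smallest one with evidence of transfer, rather than a fixed worst-case budget shared by all images.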