Black-box attacks typically face two problems: poor transferability and an inability to evade adversarial defenses. To overcome these shortcomings, we propose AdvSmo, a novel approach that generates adversarial examples by smoothing the linear texture structures in a benign image. AdvSmo constructs adversarial examples without relying on any internal information of the target model, and it uses an imperceptibility and high-attack-success-rate constraint to guide a Gabor filter in selecting appropriate angles and scales for smoothing the linear textures of the input images. Benefiting from this design, AdvSmo generates adversarial examples with strong transferability and solid evasiveness. Finally, compared with four state-of-the-art black-box adversarial attack methods across eight target models, AdvSmo improves the average attack success rate over the best of these methods by 9% on CIFAR-10 and 16% on Tiny-ImageNet.
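The core operation described above is orientation- and scale-selective smoothing with a Gabor filter. The following is a minimal sketch of that step only, not the paper's full method: the constraint that selects the angle and scale is not reproduced here, and the function names (`gabor_kernel`, `smooth_linear_texture`) are hypothetical. It builds a real-valued Gabor kernel in NumPy and applies its normalized envelope as an oriented smoothing filter via FFT convolution.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, lambd, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel of shape (ksize, ksize).

    theta is the orientation in radians, lambd the sinusoid wavelength,
    sigma the width of the Gaussian envelope, gamma the aspect ratio.
    """
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the filter is oriented along theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / lambd + psi)
    return envelope * carrier

def smooth_linear_texture(img, theta, scale):
    """Hypothetical stand-in for AdvSmo's smoothing step: convolve the
    image with the normalized magnitude of a Gabor kernel at the chosen
    angle (theta) and scale, which blurs texture along that orientation.
    """
    k = np.abs(gabor_kernel(ksize=2 * scale + 1, sigma=float(scale),
                            theta=theta, lambd=float(scale)))
    k /= k.sum()  # normalize so the filter preserves mean intensity
    return fftconvolve(img, k, mode="same")

# Example: smooth a random grayscale image along the 45-degree direction.
img = np.random.rand(32, 32)
adv = smooth_linear_texture(img, theta=np.pi / 4, scale=3)
```

In the actual attack, the angle and scale would be chosen by the imperceptibility/attack-success constraint rather than fixed by hand as here.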