Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples: malicious images carrying perturbations that are intended to be visually imperceptible. However, even when these carefully crafted perturbations are constrained by tight $L_p$ norm bounds, they can still be perceptible to humans, and they achieve limited success rates when attacking black-box models or models with defenses such as noise reduction filters. To address these problems, we propose the Demiguise Attack, which crafts ``unrestricted'' perturbations guided by Perceptual Similarity. Specifically, we create powerful and photorealistic adversarial examples by manipulating semantic information under a Perceptual Similarity constraint. Although the perturbations are of large magnitudes, the adversarial examples we generate remain friendly to the human visual system (HVS). We extend widely used attacks with our approach, markedly enhancing adversarial effectiveness while also improving imperceptibility. Extensive experiments show that the proposed method not only outperforms various state-of-the-art attacks in terms of fooling rate, transferability, and robustness against defenses, but also strengthens existing attacks when combined with them. In addition, we observe that our implementation can simulate the illumination and contrast changes that occur in real-world scenarios, which helps expose the blind spots of DNNs.
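The abstract does not specify the optimization procedure itself, but the core idea of replacing an $L_p$ ball with a perceptual budget can be illustrated concretely. Below is a minimal PyTorch sketch using the LPIPS metric (the \texttt{lpips} package) as the Perceptual Similarity measure; the function name \texttt{perceptual\_attack} and the parameters \texttt{eps\_lpips} and \texttt{lam} are hypothetical choices for illustration, and this is not the authors' actual algorithm.

\begin{verbatim}
# A minimal sketch of a perceptual-similarity-constrained attack, assuming a
# pretrained PyTorch classifier `model`, inputs `x` in [0, 1], and labels `y`.
# This illustrates optimizing an "unrestricted" perturbation under an LPIPS
# budget instead of an L_p ball; it is NOT the paper's exact method.
import torch
import lpips  # https://github.com/richzhang/PerceptualSimilarity

perc = lpips.LPIPS(net='alex')  # learned perceptual similarity metric

def perceptual_attack(model, x, y, steps=100, lr=0.01,
                      eps_lpips=0.05, lam=10.0):
    """Maximize classification loss while keeping LPIPS(x, x_adv) small."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        ce = torch.nn.functional.cross_entropy(model(x_adv), y)
        # lpips expects inputs scaled to [-1, 1]
        d = perc(x * 2 - 1, x_adv * 2 - 1).mean()
        # Penalize perceptual distance beyond the budget; no L_p constraint,
        # so the perturbation magnitude itself may be large.
        loss = -ce + lam * torch.clamp(d - eps_lpips, min=0)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).detach().clamp(0, 1)
\end{verbatim}

The penalty formulation leaves the perturbation free in pixel space, which is what allows large-magnitude yet HVS-friendly changes such as the illumination and contrast shifts mentioned above.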