Deep neural networks are vulnerable to adversarial examples, even in the black-box setting where the attacker has access only to the model output. Recent studies have devised effective black-box attacks with high query efficiency. However, this performance often comes at the cost of attack imperceptibility, hindering the practical use of these approaches. In this paper, we propose restricting the perturbations to a small salient region to generate adversarial examples that are hardly perceptible. This approach is readily compatible with many existing black-box attacks and can significantly improve their imperceptibility with little degradation in attack success rate. Further, we propose the Saliency Attack, a new black-box attack that refines the perturbations within the salient region to achieve even better imperceptibility. Extensive experiments show that, compared to state-of-the-art black-box attacks, our approach achieves much better imperceptibility scores, measured by most apparent distortion (MAD) and the $L_0$ and $L_2$ distances, and also obtains significantly higher success rates as judged by a human-like threshold on MAD. Importantly, the perturbations generated by our approach are interpretable to some extent. Finally, our approach is also demonstrated to be robust to different detection-based defenses.
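To make the core idea concrete, here is a minimal sketch of how a saliency mask can restrict the perturbations produced by an arbitrary black-box attack, assuming a precomputed binary saliency map; the function and argument names (`masked_perturbation`, `saliency_mask`) are hypothetical illustrations, not the paper's actual interface.

```python
import numpy as np

def masked_perturbation(x, delta, saliency_mask):
    """Restrict a candidate perturbation to the salient region.

    x             : clean image in [0, 1], shape (H, W, C)
    delta         : perturbation proposed by any black-box attack
    saliency_mask : binary array of shape (H, W, 1); 1 inside the
                    salient region, 0 elsewhere (assumed precomputed)
    """
    # Zero out the perturbation outside the salient region, then
    # clip the result back into the valid pixel range.
    return np.clip(x + delta * saliency_mask, 0.0, 1.0)
```

In this sketch, a query-based black-box attack would wrap each candidate perturbation with the mask before querying the model, so only pixels in the salient region are ever modified, which directly bounds the $L_0$ distance of the resulting adversarial example.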