Studies of black-box adversarial attacks have become increasingly prevalent because the structural knowledge of deep neural networks (DNNs) is intractable to acquire. However, the performance of emerging attacks degrades when fooling DNNs tailored for high-resolution images. One explanation is that these methods usually attack the entire image regardless of its spatial semantic information, and thereby encounter the notorious curse of dimensionality. To this end, we propose a pixel correlation-based attentional black-box adversarial attack, termed PICA. First, we take only one of every two neighboring pixels in the salient region as the target, leveraging an attention mechanism and the pixel correlation of natural images, so that the dimensionality of the black-box attack is reduced. After that, a general multiobjective evolutionary algorithm is employed to traverse the reduced pixels and generate perturbations that are imperceptible to human vision. Extensive experimental results verify the effectiveness of the proposed PICA on the ImageNet dataset. More importantly, PICA is computationally more efficient at generating high-resolution adversarial examples than existing black-box attacks.
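The dimension-reduction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the threshold parameter, and the use of a checkerboard pattern to realize "one of every two neighboring pixels" are assumptions for the sake of the example.

```python
import numpy as np

def reduced_pixel_indices(saliency_mask, threshold=0.5):
    """Select one of every two neighboring pixels inside the salient region.

    saliency_mask: 2-D array of attention scores in [0, 1]
                   (hypothetical output of an attention mechanism).
    Returns (rows, cols) of the pixels the attack would actually perturb.
    A checkerboard pattern (row + col even) drops every other neighbor,
    roughly halving the search dimension; the skipped neighbors can later
    be recovered by exploiting local pixel correlation.
    """
    h, w = saliency_mask.shape
    rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    checker = (rr + cc) % 2 == 0           # one of every two neighbors
    salient = saliency_mask >= threshold   # attention-selected region
    rows, cols = np.nonzero(checker & salient)
    return rows, cols

# Toy example: a 4x4 image whose left half is "salient".
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
rows, cols = reduced_pixel_indices(mask)
print(len(rows))  # 8 salient pixels -> 4 decision variables
```

In a full attack, the returned indices would define the decision variables that a multiobjective evolutionary algorithm searches over, trading off misclassification against perturbation visibility.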