Although deep neural networks (DNNs) have been shown to be susceptible to image-agnostic adversarial attacks on natural image classification problems, the effects of such attacks on DNN-based texture recognition have yet to be explored. In this work, we find that limiting the perturbation's $l_p$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images. Motivated by the fact that human perception is affected by local visual frequency characteristics, we propose a frequency-tuned universal attack method that computes universal perturbations in the frequency domain. Our experiments indicate that the proposed method produces less perceptible perturbations while achieving similar or higher white-box fooling rates on various DNN texture classifiers and texture datasets than existing universal attack techniques. We also demonstrate that our approach improves attack robustness against defended models as well as cross-dataset transferability for texture recognition problems.
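To make the frequency-domain idea concrete, the following is a minimal sketch (not the authors' implementation) of how a universal perturbation can be parameterized by block-DCT coefficients, with each frequency band scaled by a perceptual sensitivity weight so that perturbation energy is steered toward bands humans notice less. The `band_weights` ramp used here is a hypothetical placeholder; the paper's weighting is derived from local visual frequency characteristics.

```python
# Sketch: map frequency-domain coefficients to a spatial universal
# perturbation via a perceptually weighted inverse block DCT.
# Assumptions (not from the paper): 8x8 blocks, 224x224 inputs, and a
# simple linear band-weight ramp standing in for a perceptual model.

import numpy as np
from scipy.fft import idctn

BLOCK = 8    # 8x8 DCT blocks, as in JPEG-style frequency analysis
IMG = 224    # assumed input resolution of the texture classifier

# Hypothetical per-band sensitivity weights: damp low frequencies
# (most visible) and allow more energy in mid/high-frequency bands.
u, v = np.meshgrid(np.arange(BLOCK), np.arange(BLOCK), indexing="ij")
band_weights = (u + v) / (2 * (BLOCK - 1))   # 0 at DC, 1 at highest band

def spatial_perturbation(coeffs: np.ndarray) -> np.ndarray:
    """Map DCT coefficients of shape (H/8, W/8, 8, 8) to a full-resolution
    spatial perturbation by weighting each band and inverting the DCT."""
    weighted = coeffs * band_weights                  # broadcast over blocks
    blocks = idctn(weighted, axes=(-2, -1), norm="ortho")
    # Reassemble the 8x8 blocks into a full-resolution perturbation.
    h, w = coeffs.shape[0] * BLOCK, coeffs.shape[1] * BLOCK
    return blocks.transpose(0, 2, 1, 3).reshape(h, w)

# In the actual attack, `coeffs` would be optimized over a training set to
# maximize the classifier's fooling rate; here we only show the mapping.
coeffs = np.random.randn(IMG // BLOCK, IMG // BLOCK, BLOCK, BLOCK)
delta = spatial_perturbation(coeffs)
print(delta.shape)   # (224, 224): one perturbation applied to every image
```

Constraining the coefficients per frequency band, rather than bounding the spatial $l_p$ norm of $\delta$ directly, is what allows the perturbation's perceptibility to be controlled in the frequency domain.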