Deep Neural Networks (DNNs) are vulnerable to adversarial examples: small perturbations of the input images that mislead the network into prediction errors. Considerable research effort has gone into universal adversarial perturbations (UAPs), which are gradient-free and require little prior knowledge of the data distribution. Procedural adversarial noise attacks are a data-free method for generating universal perturbations. In this paper, we propose two UAP generation methods based on procedural noise functions: Simplex noise and Worley noise. In our framework, the shading that disturbs visual classification is generated with rendering techniques. Without changing the semantic representations, the adversarial examples generated by our methods show superior attack performance.
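To make the procedural-noise idea concrete, the following is a minimal sketch of a Worley (cellular) noise perturbation. The function names, the number of feature points, and the simple additive scheme with an L∞ budget `eps` are illustrative assumptions, not the authors' actual rendering pipeline.

```python
import numpy as np

def worley_noise(h, w, n_points=20, seed=0):
    """Basic Worley/cellular noise: distance to the nearest feature point."""
    rng = np.random.default_rng(seed)
    # Scatter random feature points over the image plane (assumed count).
    pts = rng.uniform(0, 1, size=(n_points, 2)) * np.array([h, w])
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys, xs], axis=-1).astype(float)            # (h, w, 2)
    # Distance from every pixel to every feature point, then take the minimum.
    d = np.linalg.norm(coords[:, :, None, :] - pts[None, None], axis=-1)
    noise = d.min(axis=2)
    # Normalize the field to [0, 1].
    return (noise - noise.min()) / (noise.max() - noise.min())

def apply_uap(image, eps=8 / 255, seed=0):
    """Add the noise field as an image-agnostic perturbation (illustrative)."""
    h, w = image.shape[:2]
    pert = (worley_noise(h, w, seed=seed) * 2.0 - 1.0) * eps      # in [-eps, eps]
    # Broadcast one grayscale perturbation across the color channels and clip.
    return np.clip(image + pert[..., None], 0.0, 1.0)
```

Because the noise field depends only on image size and a seed, the same perturbation can be applied to every input, which is what makes the attack universal and data-free.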