Adversarial examples are specially crafted inputs that perturb the output of a deep neural network, causing a deployed learning algorithm to make deliberate errors. Most existing methods for generating adversarial examples require gradient information; even universal perturbations, which are independent of the generative model, rely on gradients to some extent. Procedural noise adversarial examples are a new approach to adversarial example generation: they use computer-graphics noise to produce universal adversarial perturbations quickly, without relying on gradient information. Combining this with the defensive idea of adversarial training, we use Perlin noise to train a neural network and obtain a model that can defend against procedural noise adversarial examples. By further applying fine-tuning to a pre-trained model, we achieve faster training and higher accuracy. Our study shows that procedural noise adversarial examples can be defended against, but why procedural noise can generate adversarial examples, and how to defend against other kinds of procedural noise adversarial examples that may emerge in the future, remain open questions.
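To make the idea of a gradient-free, procedural-noise perturbation concrete, the following is a minimal sketch of how a Perlin-noise pattern could be generated and clipped to a small max-norm budget before being added to an input image. The helper names (`perlin_grid`, `perlin_perturbation`), the sine colour mapping, and all parameter values are illustrative assumptions, not the exact procedure used in this work.

```python
import numpy as np

def perlin_grid(size, period, seed=0):
    """Classic 2D Perlin (gradient) noise of shape (size, size).
    `period` is the lattice cell size in pixels; larger values give
    lower-frequency noise."""
    rng = np.random.default_rng(seed)
    cells = size // period + 2
    # Random unit gradient vectors at the lattice corners.
    angles = rng.uniform(0, 2 * np.pi, (cells, cells))
    grads = np.stack([np.cos(angles), np.sin(angles)], axis=-1)

    # Pixel coordinates expressed in lattice units.
    coords = np.arange(size) / period
    x, y = np.meshgrid(coords, coords)
    xi, yi = x.astype(int), y.astype(int)
    xf, yf = x - xi, y - yi

    def dot_grad(ix, iy, dx, dy):
        # Dot product of corner gradients with the offset to each pixel.
        g = grads[iy, ix]
        return g[..., 0] * dx + g[..., 1] * dy

    n00 = dot_grad(xi,     yi,     xf,     yf)
    n10 = dot_grad(xi + 1, yi,     xf - 1, yf)
    n01 = dot_grad(xi,     yi + 1, xf,     yf - 1)
    n11 = dot_grad(xi + 1, yi + 1, xf - 1, yf - 1)

    # Perlin's fade curve gives smooth interpolation between corners.
    fade = lambda t: 6 * t**5 - 15 * t**4 + 10 * t**3
    u, v = fade(xf), fade(yf)
    nx0 = n00 * (1 - u) + n10 * u
    nx1 = n01 * (1 - u) + n11 * u
    return nx0 * (1 - v) + nx1 * v

def perlin_perturbation(size=224, period=32, freq_sine=8.0, eps=16 / 255, seed=0):
    """Map Perlin noise through a high-frequency sine to get a banded pattern,
    then clip to an L-infinity budget `eps` (assumed parameters)."""
    pattern = np.sin(perlin_grid(size, period, seed) * freq_sine * np.pi)
    pert = eps * np.sign(pattern)
    return np.repeat(pert[..., None], 3, axis=-1)  # broadcast to RGB

# Example: perturb an image x of shape (H, W, 3) with values in [0, 1].
# No gradient of the target model is needed; different (period, freq_sine, seed)
# values give different candidate universal perturbations.
# x_adv = np.clip(x + perlin_perturbation(x.shape[0]), 0.0, 1.0)
```

For adversarial training as described above, such perturbed images would simply be mixed into the training batches (here, while fine-tuning a pre-trained model) so the network learns to classify them correctly.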