Deep learning models are known to be vulnerable to adversarial examples: small perturbations of the input can cause wrong predictions. Most existing work on adversarial image generation aims to attack as many models as possible, while few efforts are made to guarantee the perceptual quality of the adversarial examples. High-quality adversarial examples matter for many applications, especially privacy preservation. In this work, we develop a framework based on the Minimum Noticeable Difference (MND) concept to generate adversarial, privacy-preserving images that have minimal perceptual difference from the clean ones yet are able to attack deep learning models. To achieve this, an adversarial loss is first proposed so that the adversarial images attack the deep learning models successfully. Then, a perceptual quality-preserving loss is developed that accounts for the magnitude of the perturbation as well as the structural and gradient changes it causes, aiming to preserve high perceptual quality during adversarial image generation. To the best of our knowledge, this is the first work to explore quality-preserving adversarial image generation based on the MND concept for privacy preservation. To evaluate performance in terms of perceptual quality, deep models for image classification and face recognition are attacked with the proposed method and several anchor methods. Extensive experimental results demonstrate that the proposed MND framework generates adversarial images with remarkably better quality metrics (e.g., PSNR, SSIM, and MOS) than those generated by the anchor methods.
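The quality-preserving loss described above combines three ingredients: the perturbation's magnitude, the structural change it causes, and the gradient change it causes. The following is a minimal NumPy sketch of such a combined loss; the function names, the simplified single-window structural term (an SSIM-like quantity rather than the full windowed SSIM), and the weights `alpha`, `beta`, `gamma` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gradient_maps(img):
    # Finite-difference gradients along each axis (edge rows/columns padded
    # by repetition so the output keeps the input's shape).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def structural_term(x, y, eps=1e-8):
    # Simplified global SSIM-like similarity over the whole image
    # (real SSIM uses local windows; this single-window version is a sketch).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + eps) * (2 * cov + eps)) / \
           ((mx**2 + my**2 + eps) * (vx + vy + eps))

def quality_preserving_loss(clean, adv, alpha=1.0, beta=1.0, gamma=1.0):
    # Hypothetical combination of the three terms named in the abstract.
    delta = adv - clean
    magnitude = np.mean(delta**2)                  # perturbation magnitude
    structure = 1.0 - structural_term(clean, adv)  # structural change
    gxc, gyc = gradient_maps(clean)
    gxa, gya = gradient_maps(adv)
    gradient = np.mean((gxa - gxc)**2 + (gya - gyc)**2)  # gradient change
    return alpha * magnitude + beta * structure + gamma * gradient
```

In a full attack, this term would be minimized jointly with the adversarial (misclassification) loss, so the optimizer trades off attack success against perceptual distortion.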