The growing leakage and misuse of visual information raise security and privacy concerns, driving the development of information protection techniques. Existing adversarial perturbation-based methods mainly focus on de-identification against deep learning models. However, the inherent visual information of the data has not been well protected. In this work, inspired by the Type-I adversarial attack, we propose an adversarial visual information hiding method to protect the visual privacy of data. Specifically, the method generates obfuscating adversarial perturbations to obscure the visual information of the data, while ensuring that the hidden objectives are still correctly predicted by models. In addition, our method does not modify the parameters of the applied model, which makes it flexible for different scenarios. Experimental results on recognition and classification tasks demonstrate that the proposed method effectively hides visual information while hardly affecting the performance of the models. The code is available in the supplementary material.
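To make the idea concrete, the following is a minimal PyTorch sketch of the optimization described above: a perturbation is optimized to push the image away from its original appearance in pixel space (obscuring visual content) while keeping the frozen model's output close to its original prediction. The loss weights, step count, learning rate, and the simple pixel-space distortion term are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def hide_visual_information(model, image, steps=200, lr=0.01,
                            lambda_obfuscate=1.0, lambda_keep=10.0):
    """Optimize an additive perturbation that visually obscures `image`
    while keeping the (frozen) model's prediction close to the original.
    Hyperparameters here are illustrative assumptions."""
    model.eval()
    with torch.no_grad():
        target_logits = model(image)  # prediction to preserve

    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        perturbed = torch.clamp(image + delta, 0.0, 1.0)
        logits = model(perturbed)

        # Obfuscation term: maximize pixel-space distance from the original
        # image so the visual content is hidden.
        obfuscation_loss = -F.mse_loss(perturbed, image)

        # Consistency term: keep the model output close to the original
        # prediction so downstream recognition is unaffected.
        keep_loss = F.mse_loss(logits, target_logits)

        loss = lambda_obfuscate * obfuscation_loss + lambda_keep * keep_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

Note that the model parameters are never updated; only the input perturbation is optimized, which reflects the claim that the applied model is left unchanged.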