Deepfakes pose severe threats of visual misinformation to our society. One representative deepfake application is face manipulation, which modifies a victim's facial attributes in an image, e.g., changing their age or hair color. The state-of-the-art face manipulation techniques rely on Generative Adversarial Networks (GANs). In this paper, we propose the first defense system, namely UnGANable, against GAN-inversion-based face manipulation. Specifically, UnGANable focuses on defending against GAN inversion, an essential step for face manipulation. Its core technique is to search for alternative images (called cloaked images) around the original images (called target images) in image space. When posted online, these cloaked images can jeopardize the GAN inversion process. We consider two state-of-the-art inversion techniques, namely optimization-based inversion and hybrid inversion, and design five different defenses under five scenarios depending on the defender's background knowledge. Extensive experiments on four popular GAN models trained on two benchmark face datasets show that UnGANable achieves remarkable effectiveness and utility performance, and outperforms multiple baseline methods. We further investigate four adaptive adversaries that attempt to bypass UnGANable and show that some of them are slightly effective.
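To make the defended step concrete, the following is a minimal toy sketch of optimization-based GAN inversion and the effect of a cloaking perturbation. It uses a hypothetical linear "generator" G(z) = A @ z in place of a real GAN, and a random perturbation in place of the paper's actual cloak-search procedure; it only illustrates that a small change to the posted image degrades the reconstruction an inverter recovers.

```python
# Toy illustration of optimization-based GAN inversion (hypothetical
# linear generator; the real setting uses deep GANs such as StyleGAN).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))            # toy "generator" weights
G = lambda z: A @ z                    # generator: latent z -> image x

def invert(x, steps=500, lr=0.02):
    """Optimization-based inversion: gradient descent on ||G(z) - x||^2."""
    z = np.zeros(3)
    for _ in range(steps):
        grad = 2 * A.T @ (G(z) - x)    # gradient of the reconstruction loss
        z -= lr * grad
    return z

# Target image that lies exactly on the generator's range: inversion succeeds.
x_target = G(rng.normal(size=3))
err_clean = np.linalg.norm(G(invert(x_target)) - x_target)

# A "cloak" is a small perturbation added before posting the image online.
# Here it is just random noise (an assumption for illustration, NOT the
# paper's optimized cloak); it already pulls the inverted image away from
# the target, jeopardizing downstream manipulation.
x_cloaked = x_target + 0.1 * rng.normal(size=8)
err_cloaked = np.linalg.norm(G(invert(x_cloaked)) - x_target)
```

After running, `err_clean` is near zero while `err_cloaked` is strictly larger, mirroring (in caricature) how cloaked images degrade the inversion step that face manipulation depends on.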