Benefiting from the development of generative adversarial networks (GANs), facial manipulation has recently achieved significant progress in both academia and industry. It inspires an increasing number of entertainment applications but meanwhile poses severe threats to individual privacy and even political security. To mitigate such risks, many countermeasures have been proposed. However, the great majority of these methods are designed in a passive manner, i.e., they detect whether facial images or videos have been tampered with after their wide propagation. Such detection-based methods share a fatal limitation: they work only for ex-post forensics and cannot prevent the malicious behavior from occurring in the first place. To address this limitation, in this paper, we propose a novel framework of initiative defense that degrades the performance of facial manipulation models controlled by malicious users. The basic idea is to actively inject imperceptible venom into the target facial data before manipulation. To this end, we first imitate the target manipulation model with a surrogate model, and then devise a poison perturbation generator to obtain the desired venom. An alternating training strategy is further leveraged to train both the surrogate model and the perturbation generator. Two typical facial manipulation tasks, face attribute editing and face reenactment, are considered in our initiative defense framework. Extensive experiments demonstrate the effectiveness and robustness of our framework in different settings. Finally, we hope this work can shed some light on initiative countermeasures against more adversarial scenarios.
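The alternating strategy sketched above can be illustrated with a deliberately simplified toy: here the "manipulation model" is an unknown fixed linear transform, the surrogate is a learned weight matrix, and the venom is a single budget-constrained perturbation updated by sign gradient ascent rather than a full generator network. All hyper-parameters (`eps`, learning rates, step count) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the attacker's manipulation model: an unknown linear transform T.
T = rng.normal(size=(8, 8))

def manipulate(x):
    # The "target manipulation model" the defender cannot access directly.
    return x @ T.T

# Illustrative hyper-parameters (assumptions, not the paper's settings).
eps = 0.05          # imperceptibility budget for the venom (L_inf)
lr_s, lr_g = 0.01, 0.1

W = np.zeros((8, 8))            # surrogate model weights
delta = np.zeros((1, 8))        # shared poison perturbation ("venom")
x = rng.normal(size=(64, 8))    # clean facial features (toy data)

for step in range(300):
    # (1) Surrogate step: fit W to imitate the manipulation on poisoned inputs.
    xp = x + delta
    grad_W = 2 * (xp @ W.T - manipulate(xp)).T @ xp / len(x)
    W -= lr_s * grad_W

    # (2) Venom step: push the surrogate's output on poisoned data away from
    #     the clean manipulation result, within the eps budget (PGD-style).
    err = (x + delta) @ W.T - manipulate(x)
    grad_delta = (2 * err @ W).mean(axis=0, keepdims=True)
    delta = np.clip(delta + lr_g * np.sign(grad_delta), -eps, eps)

# The venom stays within the imperceptibility budget while degrading
# the manipulation output on protected data.
print(np.abs(delta).max())
```

In the actual framework the venom comes from a trained perturbation generator and the surrogate is a deep network, but the same two-player alternation drives both.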