DeepFake is becoming a real risk to society, posing potential threats to both individual privacy and political security because DeepFaked multimedia is realistic and convincing. However, the popular passive DeepFake detection is an ex-post forensics countermeasure and fails to block the spread of disinformation in advance. To address this limitation, researchers have studied proactive defense techniques that add adversarial noise to the source data to disrupt DeepFake manipulation. However, existing proactive DeepFake defenses based on injecting adversarial noise are not robust: as revealed by the recent study MagDR, they can be easily bypassed with simple image reconstruction. In this paper, we investigate the vulnerability of existing forgery techniques and propose a novel \emph{anti-forgery} technique that helps users protect their shared facial images from attackers capable of applying popular forgery techniques. Our proposed method generates perceptual-aware perturbations in an incessant manner, which is vastly different from prior studies that add sparse adversarial noise. Experimental results reveal that our perceptual-aware perturbations are robust to diverse image transformations, especially the competitive evasion technique MagDR, which is based on image reconstruction. Our findings potentially open up a new research direction towards a thorough understanding and investigation of perceptual-aware adversarial attacks for protecting facial images against DeepFakes in a proactive and robust manner. We open-source our tool to foster future research. Code is available at https://github.com/AbstractTeen/AntiForgery/.
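To make the general idea of proactive DeepFake disruption concrete, the sketch below shows one common way such adversarial perturbations are generated: an additive perturbation on the source face is optimized so that the output of a forgery model is visibly degraded, while the perturbation itself stays within a small budget. This is a minimal illustrative sketch, not the method proposed in this paper; the stand-in generator `forgery_model`, the MSE-based objective, and the hyperparameters are assumptions for illustration only.

```python
# Illustrative sketch (not this paper's exact method): gradient-based generation
# of an additive perturbation that disrupts a hypothetical face-manipulation model.
# `forgery_model` is an assumed stand-in for any DeepFake generator.
import torch

def disrupt(face, forgery_model, epsilon=0.05, alpha=0.005, steps=50):
    """Return a protected face whose forged output is maximally distorted.

    face:          source image tensor in [0, 1], shape (1, 3, H, W)
    forgery_model: callable mapping a face tensor to its manipulated version
    epsilon:       per-pixel perturbation budget (illustrative value)
    """
    clean_output = forgery_model(face).detach()  # forgery result on the unprotected image
    delta = torch.zeros_like(face, requires_grad=True)

    for _ in range(steps):
        forged = forgery_model(face + delta)
        # Maximize the distance between the forgery of the protected image and the
        # forgery of the clean image, so the manipulation visibly breaks.
        loss = -torch.nn.functional.mse_loss(forged, clean_output)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()               # PGD-style gradient step
            delta.clamp_(-epsilon, epsilon)                  # keep perturbation small
            delta.copy_((face + delta).clamp(0, 1) - face)   # keep the image valid
        delta.grad.zero_()

    return (face + delta).detach()
```

In contrast to such sparse, pixel-space noise, the perturbations proposed in this paper are perceptual-aware and applied in an incessant manner, which is what makes them robust to image reconstruction defenses such as MagDR.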