Deepfakes have raised serious concerns about the authenticity of visual content. Prior works revealed the possibility of disrupting deepfakes by adding adversarial perturbations to the source data, but we argue that this threat has not yet been eliminated. This paper presents MagDR, a mask-guided detection and reconstruction pipeline for defending deepfakes against adversarial attacks. MagDR starts with a detection module that defines a few criteria for judging the abnormality of deepfake outputs, and then uses it to guide a learnable reconstruction procedure. Adaptive masks are extracted to capture changes in local facial regions. In experiments, MagDR defends three main deepfake tasks, and the learned reconstruction pipeline transfers across input data, showing promising performance against both black-box and white-box attacks.
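To make the detect-then-reconstruct idea concrete, below is a minimal sketch of such a pipeline. The abnormality criterion, the mask-extraction rule, the smoothing-based reconstruction, and all function names (`abnormality_score`, `extract_mask`, `reconstruct`, `magdr_like_defense`) are illustrative assumptions, not the paper's actual modules; MagDR's criteria and reconstruction are learned, whereas this toy version uses hand-crafted stand-ins.

```python
# Hypothetical sketch of a mask-guided detection-and-reconstruction defense,
# assuming a single-image deepfake model with HxWxC float inputs/outputs.
import numpy as np


def abnormality_score(output: np.ndarray) -> float:
    """Toy criterion: average variance over 8x8 patches.

    A real system would combine several detection criteria; this stand-in
    merely flags outputs with unusually noisy local statistics.
    """
    h, w, _ = output.shape
    patches = output[: h // 8 * 8, : w // 8 * 8].reshape(h // 8, 8, w // 8, 8, -1)
    return float(patches.var(axis=(1, 3)).mean())


def extract_mask(source: np.ndarray, output: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Toy adaptive mask: mark regions where the model changed the face."""
    diff = np.abs(output - source).mean(axis=-1, keepdims=True)
    return (diff > thresh).astype(np.float32)


def reconstruct(x: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Toy reconstruction: smooth only inside the masked regions
    (a stand-in for the learnable reconstruction module)."""
    blurred = (x + np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 5.0
    return mask * blurred + (1.0 - mask) * x


def magdr_like_defense(x, deepfake_model, score_thresh=0.05):
    """Run the model, detect abnormal outputs, and rerun on a
    mask-guided reconstruction of the input if needed."""
    out = deepfake_model(x)
    if abnormality_score(out) <= score_thresh:
        return out                      # output looks normal; nothing to repair
    mask = extract_mask(x, out)         # local facial regions that changed
    repaired = reconstruct(x, mask)     # purify the input inside the mask
    return deepfake_model(repaired)
```

In this hedged reading, the detection score decides whether reconstruction is triggered at all, and the mask restricts the repair to the facial regions the deepfake actually modified, leaving the rest of the image untouched.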