Although current deep learning-based face forgery detectors achieve impressive performance in constrained scenarios, they are vulnerable to samples created by unseen manipulation methods. Some recent works show improvements in generalisation but rely on cues that are easily corrupted by common post-processing operations such as compression. In this paper, we propose LipForensics, a detection approach capable of both generalising to novel manipulations and withstanding various distortions. LipForensics targets high-level semantic irregularities in mouth movements, which are common in many generated videos. It consists of first pretraining a spatio-temporal network to perform visual speech recognition (lipreading), thus learning rich internal representations related to natural mouth motion. A temporal network is subsequently finetuned on fixed mouth embeddings of real and forged data in order to detect fake videos based on mouth movements without overfitting to low-level, manipulation-specific artefacts. Extensive experiments show that this simple approach significantly surpasses the state of the art in terms of generalisation to unseen manipulations and robustness to perturbations, and shed light on the factors responsible for its performance.
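To make the two-stage setup concrete, the following is a minimal PyTorch sketch of the idea, not the authors' implementation: a lipreading-style spatio-temporal front-end is kept frozen and produces per-frame mouth embeddings, while only a lightweight temporal head is finetuned for real/fake classification. The module names (`MouthFeatureExtractor`, `TemporalHead`), the simplified 3D-conv front-end, and the small temporal conv stack are stand-ins chosen for brevity; the paper's actual model uses a lipreading-pretrained 3D-conv + ResNet front-end and an MS-TCN temporal network.

```python
# Hedged sketch of a LipForensics-style pipeline: frozen mouth embeddings
# from a lipreading-pretrained network, plus a finetuned temporal head.
# All module names and layer choices here are illustrative assumptions.
import torch
import torch.nn as nn


class MouthFeatureExtractor(nn.Module):
    """Stand-in for a lipreading-pretrained spatio-temporal front-end.

    Takes grayscale mouth crops (B, 1, T, H, W) and returns per-frame
    embeddings (B, T, C). In the described approach this module is
    pretrained on visual speech recognition and then kept frozen.
    """

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.frontend3d = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2),
                      padding=(2, 3, 3), bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=(1, 3, 3), stride=(1, 2, 2), padding=(0, 1, 1)),
        )
        # Per-frame spatial pooling + projection stands in for the 2D ResNet trunk.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.frontend3d(x)                    # (B, 64, T, H', W')
        b, c, t, h, w = feats.shape
        feats = feats.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        feats = self.pool(feats).flatten(1)           # (B*T, 64)
        return self.proj(feats).view(b, t, -1)        # (B, T, embed_dim)


class TemporalHead(nn.Module):
    """Temporal network finetuned on the frozen mouth embeddings.

    A small dilated temporal conv stack stands in for the MS-TCN used in
    the paper; it outputs one real/fake logit per clip.
    """

    def __init__(self, embed_dim: int = 512, hidden: int = 256):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        h = self.tcn(embeddings.transpose(1, 2))      # (B, hidden, T)
        return self.classifier(h.mean(dim=2))         # (B, 1) clip-level logit


# Usage: freeze the lipreading features, finetune only the temporal head,
# so training cannot overfit to low-level, manipulation-specific artefacts.
extractor, head = MouthFeatureExtractor(), TemporalHead()
extractor.eval()
for p in extractor.parameters():
    p.requires_grad = False

clips = torch.randn(2, 1, 25, 88, 88)                # two 25-frame mouth-crop clips
labels = torch.tensor([[0.0], [1.0]])                # 0 = real, 1 = fake
with torch.no_grad():
    embeddings = extractor(clips)
logits = head(embeddings)
loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
loss.backward()                                       # gradients reach only the head
```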