We propose a new method to detect deepfake images using the cue of source-feature inconsistency within forged images. It is based on the hypothesis that an image's distinct source features can be preserved and extracted even after the image passes through state-of-the-art deepfake generation processes. We introduce a novel representation learning approach, called pair-wise self-consistency learning (PCL), for training ConvNets to extract these source features and detect deepfake images. It is accompanied by a new image synthesis approach, called inconsistency image generator (I2G), which provides richly annotated training data for PCL. Experimental results on seven popular datasets show that our models improve the averaged AUC over the state of the art from 96.45% to 98.05% in the in-dataset evaluation and from 86.03% to 92.18% in the cross-dataset evaluation.