Deepfakes, which employ GANs (generative adversarial networks) to produce highly realistic facial manipulations, are widely regarded as the prevailing forgery method. Traditional CNNs can identify manipulated media, but they generalize poorly across datasets and, lacking robustness, are vulnerable to adversarial attacks. Vision transformers have shown promise in image classification, but they require large amounts of training data. Motivated by these limitations, this paper introduces Tex-ViT (Texture-Vision Transformer), which enhances CNN features by combining ResNet with a vision transformer. The model combines traditional ResNet features with a texture module that operates in parallel on the outputs of ResNet sections before each down-sampling operation. The texture module's output then serves as input to the dual branches of a cross-attention vision transformer. The design specifically focuses on improving the global texture module, which extracts feature-map correlations. Empirical analysis reveals that manipulated images exhibit smooth textures that do not remain consistent over long distances. Experiments were performed on the different manipulation categories of FF++ (DF, F2F, FS, and NT), together with other GAN-generated datasets, in cross-domain scenarios. Experiments were also conducted on the FF++, DFDCPreview, and Celeb-DF datasets under several post-processing conditions, such as blurring, compression, and noise. The model surpassed state-of-the-art models in generalization, achieving 98% accuracy in cross-domain scenarios, which demonstrates its ability to learn the shared discriminative textural characteristics of manipulated samples. These experiments provide evidence that the proposed model applies to varied situations and is robust to many post-processing operations.
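The abstract does not specify the exact form of the global texture module; a common way to capture the kind of long-range, position-invariant texture statistics it describes is a Gram-matrix correlation over feature-map channels. The PyTorch sketch below illustrates this idea under that assumption: the class name `TextureModule`, the linear projection, and the feature-map shapes are hypothetical, not the paper's implementation.

```python
# Minimal sketch of a Gram-matrix-style texture module, assuming the
# "global texture" operation correlates the channels of a ResNet feature
# map. All names and dimensions here are illustrative assumptions.
import torch
import torch.nn as nn


class TextureModule(nn.Module):
    """Computes channel-wise correlations (a Gram matrix) of a feature map,
    capturing texture statistics that are largely position-invariant."""

    def __init__(self, in_channels: int, out_dim: int):
        super().__init__()
        # Project the flattened C x C correlation matrix to a token that a
        # ViT branch could consume (hypothetical interface).
        self.proj = nn.Linear(in_channels * in_channels, out_dim)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        flat = feat.view(b, c, h * w)                  # (B, C, HW)
        gram = torch.bmm(flat, flat.transpose(1, 2))   # (B, C, C) channel correlations
        gram = gram / (h * w)                          # normalize by spatial size
        return self.proj(gram.view(b, -1))             # (B, out_dim) texture token


if __name__ == "__main__":
    # Example: a texture token from a ResNet stage output taken before a
    # down-sampling operation (stage shape is a made-up example).
    x = torch.randn(2, 256, 28, 28)
    token = TextureModule(256, 768)(x)
    print(token.shape)  # torch.Size([2, 768])
```

In a dual-branch design like the one the abstract describes, such a texture token would feed one branch of the cross-attention vision transformer while the standard ResNet features feed the other, letting each branch attend to the other's representation.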