Intra-frame inconsistency has proved effective for improving the generalization of face forgery detection. However, learning to focus on such inconsistencies requires extra pixel-level annotations of forged locations, and acquiring such annotations is non-trivial. Some existing methods generate large-scale synthesized data with location annotations, but these data are composed only of real images and cannot capture the properties of forgery regions. Others generate forgery location labels by subtracting paired real and fake images, yet such paired data are difficult to collect and the generated labels are usually discontinuous. To overcome these limitations, we propose a novel Unsupervised Inconsistency-Aware method based on the Vision Transformer, called UIA-ViT, which uses only video-level labels and can learn inconsistency-aware features without pixel-level annotations. Owing to the self-attention mechanism, the attention map among patch embeddings naturally represents consistency relations, making the Vision Transformer well suited for consistency representation learning. Building on the Vision Transformer, we propose two key components: Unsupervised Patch Consistency Learning (UPCL) and Progressive Consistency Weighted Assemble (PCWA). UPCL learns consistency-related representations with progressively optimized pseudo annotations. PCWA enhances the final classification embedding with the patch embeddings optimized by UPCL to further improve detection performance. Extensive experiments demonstrate the effectiveness of the proposed method.
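To make the consistency idea concrete, the following minimal PyTorch sketch illustrates how a ViT self-attention map over patch embeddings could be turned into per-patch consistency weights that pool the patch embeddings into a classification feature. The tensor shapes, the head-averaging scheme, and the function name are assumptions for illustration only, not the paper's actual UPCL/PCWA implementation.

```python
# Illustrative sketch only, not the UIA-ViT code: using a ViT-style
# attention map as a consistency signal that weights patch embeddings
# before pooling them into a classification feature.
import torch
import torch.nn.functional as F

def consistency_weighted_assemble(patch_tokens: torch.Tensor,
                                  attn: torch.Tensor) -> torch.Tensor:
    """patch_tokens: (B, N, D) patch embeddings from a Transformer block.
    attn: (B, H, N, N) self-attention map over the N patches.
    Returns a (B, D) pooled embedding in which patches that the rest of
    the image attends to consistently contribute more."""
    # Average the attention map over heads: (B, N, N).
    attn = attn.mean(dim=1)
    # A simple per-patch consistency score: how strongly, on average,
    # all other patches attend to this patch (column mean of the map).
    consistency = attn.mean(dim=1)            # (B, N)
    weights = F.softmax(consistency, dim=-1)  # normalize across patches
    # Consistency-weighted sum of patch embeddings; this feature could
    # then be fused with the usual class token for final classification.
    return torch.einsum('bn,bnd->bd', weights, patch_tokens)

# Toy usage with random tensors standing in for real ViT outputs.
B, H, N, D = 2, 12, 196, 768
tokens = torch.randn(B, N, D)
attn = torch.softmax(torch.randn(B, H, N, N), dim=-1)
feat = consistency_weighted_assemble(tokens, attn)  # shape (B, 768)
```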