The rapid development of facial manipulation techniques has raised public concern in recent years. Following the success of deep learning, existing methods typically formulate DeepFake video detection as a binary classification problem and develop frame-based and video-based solutions. However, little attention has been paid to capturing the spatial-temporal inconsistency in forged videos. To address this issue, we cast this task as a Spatial-Temporal Inconsistency Learning (STIL) process and instantiate it as a novel STIL block, which consists of a Spatial Inconsistency Module (SIM), a Temporal Inconsistency Module (TIM), and an Information Supplement Module (ISM). Specifically, we present a novel temporal modeling paradigm in TIM by exploiting the temporal difference over adjacent frames along both horizontal and vertical directions. The ISM simultaneously utilizes the spatial information from SIM and the temporal information from TIM to establish a more comprehensive spatial-temporal representation. Moreover, the STIL block is flexible and can be plugged into existing 2D CNNs. Extensive experiments and visualizations demonstrate the effectiveness of our method against state-of-the-art competitors.
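The core idea behind the temporal modeling in TIM can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name and the pooling choices below are hypothetical, and it merely shows the gist of taking differences over adjacent frames and summarizing them along the horizontal and vertical spatial axes.

```python
import numpy as np

def temporal_inconsistency(frames):
    """Hypothetical sketch of temporal-difference modeling.

    Given a video clip as an array of shape (T, H, W, C), compute the
    difference between each pair of adjacent frames, then collapse the
    difference maps along one spatial axis at a time to obtain a
    vertical (per-row) and a horizontal (per-column) profile.
    """
    diff = frames[1:] - frames[:-1]          # (T-1, H, W, C) adjacent-frame differences
    vertical = np.abs(diff).mean(axis=2)     # (T-1, H, C): averaged over width
    horizontal = np.abs(diff).mean(axis=1)   # (T-1, W, C): averaged over height
    return vertical, horizontal

# Toy clip: 4 random 8x8 RGB frames standing in for a decoded video snippet.
rng = np.random.default_rng(0)
clip = rng.normal(size=(4, 8, 8, 3))
v, h = temporal_inconsistency(clip)
print(v.shape, h.shape)  # (3, 8, 3) (3, 8, 3)
```

In the actual STIL block these directional temporal signals would be produced by learned layers and fused with the spatial features from SIM via ISM; the sketch only conveys why forged videos, whose frames are manipulated independently, leave detectable traces in such adjacent-frame differences.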