In this paper, we introduce 3D-CSL, a compact pipeline for Near-Duplicate Video Retrieval (NDVR), and explore a novel self-supervised learning strategy for video similarity learning. Most previous methods extract spatial features from each frame separately and then design various complex mechanisms to learn the temporal correlations among frame features; however, part of the spatiotemporal dependencies is already lost at that point. To address this, 3D-CSL extracts global spatiotemporal dependencies from videos end-to-end with a 3D transformer and strikes a good balance between efficiency and effectiveness by matching at the clip level. Furthermore, we propose a two-stage self-supervised similarity learning strategy to optimize the entire network. First, we propose PredMAE to pretrain the 3D transformer with a video prediction task; second, we propose ShotMix, a novel video-specific augmentation, and the FCS loss, a novel triplet loss, to further improve similarity learning. Experiments on FIVR-200K and CC_WEB_VIDEO demonstrate the superiority and reliability of our method, which achieves state-of-the-art performance on clip-level NDVR.
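To make the clip-level matching objective concrete, the sketch below shows a generic triplet loss over cosine similarities of clip embeddings. This is only a minimal illustration under assumed placeholders (embedding dimension, margin, random tensors), not the authors' FCS loss or the 3D-CSL encoder itself.

```python
# Minimal sketch: clip-level similarity matching with a generic triplet loss.
# NOT the authors' FCS loss or 3D transformer; all shapes, the margin, and
# the stand-in embeddings are hypothetical placeholders for illustration.
import torch
import torch.nn.functional as F


def clip_similarity(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between clip embeddings (last dim = feature dim)."""
    return F.cosine_similarity(a, b, dim=-1)


def triplet_loss(anchor: torch.Tensor,
                 positive: torch.Tensor,
                 negative: torch.Tensor,
                 margin: float = 0.2) -> torch.Tensor:
    """Pull the near-duplicate (positive) clip toward the anchor and push
    the unrelated (negative) clip away, up to a similarity margin."""
    pos_sim = clip_similarity(anchor, positive)
    neg_sim = clip_similarity(anchor, negative)
    return F.relu(neg_sim - pos_sim + margin).mean()


if __name__ == "__main__":
    # Random stand-in clip embeddings: batch of 8 clips, 512-d features.
    anchor, positive, negative = (torch.randn(8, 512) for _ in range(3))
    print(triplet_loss(anchor, positive, negative))
```

In an NDVR pipeline of this kind, the positive would typically be an augmented view of the anchor clip (e.g., produced by an augmentation such as ShotMix), while negatives come from other videos in the batch.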