The correlation between vision and text is essential for video moment retrieval (VMR); however, existing methods heavily rely on separately pre-trained feature extractors for visual and textual understanding. Without sufficient temporal boundary annotations, it is non-trivial to learn universal video-text alignments. In this work, we explore multi-modal correlations derived from large-scale image-text data to facilitate generalisable VMR. To address the limitation of image-text pre-training models in capturing video changes, we propose a generic method, referred to as Visual-Dynamic Injection (VDI), to empower the model's understanding of video moments. Whilst existing VMR methods focus on building temporal-aware video features, being aware of the text descriptions of temporal changes is also critical, yet this is overlooked by pre-training that matches static images with sentences. Therefore, we extract visual context and spatial dynamic information from video frames and explicitly enforce their alignment with the phrases describing video changes (e.g. verbs). By doing so, the potentially relevant visual and motion patterns in videos are encoded (injected) into the corresponding text embeddings so as to enable more accurate video-text alignments. We conduct extensive experiments on two VMR benchmark datasets (Charades-STA and ActivityNet-Captions) and achieve state-of-the-art performance. In particular, VDI yields notable advantages when tested on out-of-distribution splits where the testing samples involve novel scenes and vocabulary.
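To make the injection idea concrete, the following is a minimal sketch, not the paper's actual formulation: it assumes per-frame visual features and verb-phrase text embeddings (e.g. from a CLIP-style text encoder) are already available, and illustrates one plausible way to align visual context and frame-to-frame dynamics with the verb embeddings via a contrastive objective. The function name, feature shapes, and the simple mean-pool/difference pooling are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def vdi_alignment_loss(frame_feats, verb_embeds, temperature=0.07):
    """Hypothetical sketch of a visual-dynamic alignment objective.

    frame_feats: (B, T, D) per-frame visual features of a video clip.
    verb_embeds: (B, D) text embeddings of the verb phrases describing
                 the temporal change in each clip.
    """
    # Visual context: mean-pool frame features over time.
    context = frame_feats.mean(dim=1)                                   # (B, D)
    # Spatial dynamics: frame-to-frame differences, pooled over time.
    dynamics = (frame_feats[:, 1:] - frame_feats[:, :-1]).mean(dim=1)   # (B, D)

    video_repr = F.normalize(context + dynamics, dim=-1)                # (B, D)
    text_repr = F.normalize(verb_embeds, dim=-1)                        # (B, D)

    # Contrastive alignment: each verb phrase should match its own clip,
    # so relevant visual/motion patterns are encoded into the text embeddings.
    logits = video_repr @ text_repr.t() / temperature                   # (B, B)
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```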