Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to the paired video clips in a common feature space. For long videos, given a paragraph of description whose sentences describe different segments of the video, matching all sentence-clip pairs aligns the paragraph and the full video only implicitly. Such unit-level comparison, however, may ignore global temporal context, which inevitably limits the generalization ability. In this paper, we propose a contrastive learning framework, TempCLR, to compare the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To exploit temporal dynamics, we break the consistency of temporal succession by shuffling video clips with respect to temporal granularity. The resulting clip/sentence representations capture temporal information and thus facilitate sequence alignment. Beyond pre-training on videos and paragraphs, our approach also generalizes to matching between video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains across all three tasks. Detailed ablation studies are provided to justify the design choices.
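To make the sequence-level distance concrete, below is a minimal sketch of the standard dynamic time warping recursion over a sentence-by-clip cost matrix. The function name `dtw_distance`, the cosine-based cost, and the toy embedding sizes are illustrative assumptions rather than the paper's exact formulation (which may use a relaxed or normalized variant); the sketch only shows how the minimum cumulative cost is accumulated under the temporal-order constraint.

```python
import numpy as np

def dtw_distance(cost: np.ndarray) -> float:
    """Minimum cumulative cost of a monotonic alignment path through a
    sentence-by-clip cost matrix (classic dynamic time warping)."""
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # advance to the next sentence
                acc[i, j - 1],      # advance to the next clip
                acc[i - 1, j - 1],  # advance both
            )
    return acc[n, m]

# Toy usage: cosine costs between L2-normalized sentence and clip embeddings.
rng = np.random.default_rng(0)
sent = rng.normal(size=(4, 512))   # 4 sentences in the paragraph
clip = rng.normal(size=(6, 512))   # 6 clips in the video
sent /= np.linalg.norm(sent, axis=1, keepdims=True)
clip /= np.linalg.norm(clip, axis=1, keepdims=True)
cost = 1.0 - sent @ clip.T         # low cost = similar sentence-clip pair
print(dtw_distance(cost))
```

Because the recursion only ever advances forward in both sequences, the resulting distance respects the temporal order of clips and sentences, which is exactly what a bag-of-pairs comparison cannot capture.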