Sequential video understanding, an emerging video understanding task, has attracted increasing attention because of its goal-oriented nature. This paper studies weakly supervised sequential video understanding, where accurate timestamp-level text-video alignment is not provided. We tackle this task by borrowing ideas from CLIP. Specifically, we use a Transformer to aggregate frame-level features into a video representation, and a pre-trained text encoder to encode the texts corresponding to each action and to the whole video, respectively. To model the correspondence between text and video, we propose a multiple-granularity loss, in which a video-paragraph contrastive loss enforces matching between the whole video and the complete script, and a fine-grained frame-sentence contrastive loss enforces matching between each action and its description. Since the frame-sentence correspondence is not available, we exploit the fact that actions happen sequentially in the temporal domain to generate pseudo frame-sentence correspondences and supervise network training with these pseudo labels. Extensive experiments on video sequence verification and text-to-video matching show that our method outperforms baselines by a large margin, which validates the effectiveness of the proposed approach. Code is available at https://github.com/svip-lab/WeakSVR.
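
To make the multiple-granularity loss concrete, below is a minimal sketch (not the authors' released code) of the two contrastive terms under standard assumptions: a symmetric InfoNCE loss at the video-paragraph level, and a frame-sentence loss supervised by pseudo labels obtained by splitting the timeline evenly across the ordered sentences. All names (`video_emb`, `para_emb`, `frame_emb`, `sent_emb`, `pseudo_alignment`) are hypothetical, and PyTorch is assumed.

```python
# Hedged sketch of the multiple-granularity contrastive loss; names and the
# even-split pseudo-alignment heuristic are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings a[i] <-> b[i]."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature              # (B, B) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def pseudo_alignment(num_frames, num_sentences):
    """Assign each frame index to a sentence index by splitting time evenly,
    relying only on the fact that actions occur in script order."""
    return torch.linspace(0, num_sentences - 1e-6, num_frames).long()

def multi_granularity_loss(video_emb, para_emb, frame_emb, sent_emb,
                           temperature=0.07):
    """video_emb, para_emb: (B, D) batch of video/script embeddings;
    frame_emb: (T, D), sent_emb: (K, D) for one video-script pair."""
    # Coarse level: match each whole video to its complete script in the batch.
    loss_vp = info_nce(video_emb, para_emb, temperature)

    # Fine level: match each frame to its pseudo-assigned sentence.
    assign = pseudo_alignment(frame_emb.size(0), sent_emb.size(0))
    f = F.normalize(frame_emb, dim=-1)
    s = F.normalize(sent_emb, dim=-1)
    logits = f @ s.t() / temperature              # (T, K)
    loss_fs = F.cross_entropy(logits, assign)

    return loss_vp + loss_fs
```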