Video grounding aims to locate a moment of interest that matches a given query sentence in an untrimmed video. Previous works ignore the \emph{sparsity dilemma} in video annotations: the dataset fails to provide contextual information between potential events and query sentences. In this paper, we contend that exploiting easily available captions that describe general actions, \ie, prompt captions (PC) as defined in our paper, significantly boosts performance. To this end, we propose a Prompt Caption Network (PCNet) for video grounding. Specifically, we first apply dense video captioning to generate dense captions and then obtain prompt captions via Non-Prompt Caption Suppression (NPCS). To capture the potential information in prompt captions, we propose Caption Guided Attention (CGA), which projects the semantic relations between prompt captions and query sentences into the temporal space and fuses them into visual representations. Considering the gap between prompt captions and the ground truth, we propose Asymmetric Cross-modal Contrastive Learning (ACCL), which constructs more negative pairs to maximize cross-modal mutual information. Without bells and whistles, extensive experiments on three public datasets (\ie, ActivityNet Captions, TACoS and ActivityNet-CG) demonstrate that our method significantly outperforms state-of-the-art methods.
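As a concrete illustration of the ACCL objective, a minimal InfoNCE-style sketch is given below; the temperature $\tau$, the cosine similarity $s(\cdot,\cdot)$, and the use of non-matching prompt captions as additional negatives are our assumptions for exposition rather than the exact formulation used in the paper:
\[
\mathcal{L}_{\mathrm{ACCL}} = -\log \frac{\exp\!\big(s(v^{+}, q)/\tau\big)}{\exp\!\big(s(v^{+}, q)/\tau\big) + \sum_{v^{-}} \exp\!\big(s(v^{-}, q)/\tau\big) + \sum_{c^{-}} \exp\!\big(s(v^{+}, c^{-})/\tau\big)},
\]
where $q$ denotes the query embedding, $v^{+}$ the matched moment feature, $v^{-}$ mismatched moment features, and $c^{-}$ prompt-caption embeddings that do not correspond to the query; the extra caption-side negatives make the contrast asymmetric across the two modalities.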