Temporal grounding aims to localize the video moment that is semantically aligned with a given natural language query. Existing methods typically apply a detection or regression pipeline on the fused representation, with the research focus placed on designing complicated prediction heads or fusion strategies. Instead, viewing temporal grounding as a metric-learning problem, we present a Mutual Matching Network (MMN) to directly model the similarity between language queries and video moments in a joint embedding space. This metric-learning framework fully exploits negative samples from two new aspects: constructing negative cross-modal pairs in a mutual matching scheme and mining negative pairs across different videos. These new negative samples enhance the joint representation learning of the two modalities via cross-modal mutual matching to maximize their mutual information. Experiments show that MMN achieves highly competitive performance compared with state-of-the-art methods on four video grounding benchmarks. Based on MMN, we further present the winning solution to the HC-STVG challenge of the 3rd PIC workshop. This suggests that metric learning remains a promising approach for temporal grounding, as it captures the essential cross-modal correlation in a joint embedding space. Code is available at https://github.com/MCG-NJU/MMN.
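For concreteness, the cross-modal mutual matching idea can be sketched as a symmetric contrastive objective over a joint embedding space. The snippet below is a minimal illustration under simplifying assumptions, not the released implementation: the function name `mutual_matching_loss`, the tensor names `moment_feats` and `query_feats`, and the `temperature` value are hypothetical, and using all other in-batch pairs as negatives only approximates the paper's negative construction within and across videos.

```python
import torch
import torch.nn.functional as F

def mutual_matching_loss(moment_feats, query_feats, temperature=0.07):
    """Symmetric cross-modal contrastive loss (illustrative sketch).

    moment_feats: (B, D) embeddings of ground-truth video moments.
    query_feats:  (B, D) embeddings of the paired language queries.
    Matched pairs share the same batch index; every other entry in the
    batch (from the same or a different video) serves as a negative.
    """
    # Project both modalities onto the unit sphere of the joint space.
    v = F.normalize(moment_feats, dim=-1)
    q = F.normalize(query_feats, dim=-1)

    # Cosine similarity between every moment and every query.
    sim = v @ q.t() / temperature                       # (B, B)
    targets = torch.arange(sim.size(0), device=sim.device)

    # Moment -> query direction: each moment should retrieve its own query.
    loss_v2q = F.cross_entropy(sim, targets)
    # Query -> moment direction: each query should retrieve its own moment.
    loss_q2v = F.cross_entropy(sim.t(), targets)

    return 0.5 * (loss_v2q + loss_q2v)

if __name__ == "__main__":
    # Toy usage with random features standing in for model outputs.
    moments = torch.randn(8, 256)
    queries = torch.randn(8, 256)
    print(mutual_matching_loss(moments, queries).item())
```

Optimizing both matching directions jointly is what makes the scheme "mutual": the two encoders are pushed to agree in a shared space, which is one way to view the mutual-information maximization mentioned above.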