Temporal Video Grounding (TVG) aims to localize time segments in an untrimmed video according to natural language queries. In this work, we present a new paradigm named Explore-and-Match for TVG that seamlessly unifies two streams of TVG methods: proposal-free and proposal-based; the former explores the search space to find segments directly, and the latter matches predefined proposals with ground truths. To achieve this goal, we view TVG as a set prediction problem and design an end-to-end trainable Language Video Transformer (LVTR) that exploits the architectural strengths of rich contextualization and parallel decoding for set prediction. The overall training schedule is balanced by two key losses that play different roles, namely a temporal localization loss and a set guidance loss. These two losses allow each proposal to regress the target segment and identify the target query. More specifically, LVTR first explores the search space to diversify the initial proposals, and then matches the proposals to the corresponding targets to align them in a fine-grained manner. The Explore-and-Match scheme successfully combines the strengths of two complementary methods without encoding prior knowledge (e.g., non-maximum suppression) into the TVG pipeline. As a result, LVTR sets new state-of-the-art results on two TVG benchmarks (ActivityNet Captions and Charades-STA) with double the inference speed. Code is available at https://github.com/sangminwoo/Explore-and-Match.
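The two losses can be illustrated with a minimal sketch of the set-prediction objective, assuming a DETR-style formulation in which each decoded proposal predicts a normalized (center, width) segment together with a distribution over the input sentence queries, ground-truth (segment, query) pairs are assigned to proposals by Hungarian matching, and matched pairs are supervised with a temporal localization (L1) term and a set guidance (cross-entropy) term. The function name, loss weights, and cost design below are illustrative assumptions, not the paper's exact implementation.

import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment


def explore_and_match_loss(pred_segments, pred_logits, gt_segments, gt_labels,
                           w_loc=5.0, w_guide=1.0):
    """pred_segments: (P, 2) normalized (center, width) per proposal.
    pred_logits:   (P, Q) scores of each proposal over the Q sentence queries.
    gt_segments:   (G, 2) ground-truth segments; gt_labels: (G,) query indices.
    """
    # Pairwise matching cost: localization distance plus negative query score.
    loc_cost = torch.cdist(pred_segments, gt_segments, p=1)      # (P, G)
    guide_cost = -pred_logits.softmax(-1)[:, gt_labels]          # (P, G)
    cost = w_loc * loc_cost + w_guide * guide_cost

    # Hungarian matching assigns each ground-truth pair to one proposal.
    row, col = linear_sum_assignment(cost.detach().cpu().numpy())
    row, col = torch.as_tensor(row), torch.as_tensor(col)

    # Temporal localization loss: matched proposals regress their target segments.
    loss_loc = F.l1_loss(pred_segments[row], gt_segments[col])

    # Set guidance loss: matched proposals must identify their target query.
    loss_guide = F.cross_entropy(pred_logits[row], gt_labels[col])

    return w_loc * loss_loc + w_guide * loss_guide

Unmatched proposals are left unsupervised in this sketch; because the matching is recomputed every iteration, early training diversifies which proposals cover which regions (explore), while later training repeatedly assigns the same proposals to nearby targets and refines them (match).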