This paper tackles the recently proposed Video Corpus Moment Retrieval task. This task is essential because advanced video retrieval applications should enable users to retrieve a precise moment from a large video corpus. We propose a novel CONtextual QUery-awarE Ranking~(CONQUER) model for effective moment localization and ranking. CONQUER exploits query context for multi-modal fusion and representation learning in two different steps. The first step derives fusion weights for the adaptive combination of multi-modal video content. The second step performs bi-directional attention to tightly couple video and query into a single joint representation for moment localization. As query context is fully engaged in video representation learning, from feature fusion to transformation, the resulting feature is user-centered and has a larger capacity for capturing multi-modal signals specific to the query. We conduct studies on two datasets, TVR for closed-world TV episodes and DiDeMo for open-world user-generated videos, to investigate the potential advantages of fusing video and query online into a joint representation for moment retrieval.
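To make the two query-aware steps concrete, the following is a minimal PyTorch sketch, not the paper's actual implementation: the module names (QueryAwareFusion, BidirectionalAttention), feature dimensions, and the BiDAF-style attention used for the second step are all illustrative assumptions.

```python
# Minimal sketch of the two query-aware steps described in the abstract.
# Assumptions: PyTorch, two video modalities (e.g. appearance + subtitles),
# hypothetical dimensions; details differ from the actual CONQUER model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QueryAwareFusion(nn.Module):
    """Step 1: derive query-conditioned weights to adaptively fuse
    multi-modal video streams into one video feature sequence."""

    def __init__(self, dim, num_modalities=2):
        super().__init__()
        self.weight_head = nn.Linear(dim, num_modalities)

    def forward(self, query_vec, modality_feats):
        # query_vec: (B, D); modality_feats: (B, M, T, D)
        weights = F.softmax(self.weight_head(query_vec), dim=-1)     # (B, M)
        fused = (weights[:, :, None, None] * modality_feats).sum(1)  # (B, T, D)
        return fused


class BidirectionalAttention(nn.Module):
    """Step 2: couple fused video features and query tokens into a single
    joint representation via video-to-query and query-to-video attention
    (a BiDAF-style formulation, used here only for illustration)."""

    def forward(self, video, query):
        # video: (B, T, D); query: (B, L, D)
        sim = torch.bmm(video, query.transpose(1, 2))                 # (B, T, L)
        v2q = torch.bmm(F.softmax(sim, dim=-1), query)                # (B, T, D)
        q2v_scores = F.softmax(sim.max(dim=-1).values, dim=-1)        # (B, T)
        q2v = torch.bmm(q2v_scores.unsqueeze(1), video)               # (B, 1, D)
        q2v = q2v.expand(-1, video.size(1), -1)                       # (B, T, D)
        # Joint, query-coupled video representation for moment localization.
        return torch.cat([video, v2q, video * v2q, video * q2v], dim=-1)


if __name__ == "__main__":
    B, M, T, L, D = 2, 2, 16, 10, 256
    fusion, biattn = QueryAwareFusion(D, M), BidirectionalAttention()
    fused = fusion(torch.randn(B, D), torch.randn(B, M, T, D))        # (B, T, D)
    joint = biattn(fused, torch.randn(B, L, D))                       # (B, T, 4*D)
    print(joint.shape)
```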