Negation is a common linguistic skill that allows humans to express what we do NOT want. Naturally, one might expect video retrieval to support natural-language queries with negation, e.g., finding shots of kids sitting on the floor and not playing with the dog. However, state-of-the-art deep-learning-based video retrieval models lack such ability, as they are typically trained on video-description datasets such as MSR-VTT and VATEX that lack negated descriptions. Their retrieval results essentially ignore the negator in the sample query, incorrectly returning videos that show kids playing with the dog. In this paper, we present the first study on learning to understand negation in video retrieval and make the following contributions. First, by re-purposing two existing datasets, i.e., MSR-VTT and VATEX, we propose a new evaluation protocol for testing video retrieval with negation. Second, we propose a learning-based method for training a negation-aware video retrieval model. The key idea is to first construct a soft negative caption for a specific training video by partially negating its original caption, and then compute a bidirectionally constrained loss on the resulting triplet. This auxiliary loss is then added, with a weight, to a standard retrieval loss. Experiments on the re-purposed benchmarks show that re-training the CLIP (Contrastive Language-Image Pre-Training) model with the proposed method clearly improves its ability to handle queries with negation. In addition, its performance on the original benchmarks is also improved. Data and source code will be released.
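To make the training objective concrete, below is a minimal sketch, not the authors' released code, of how the auxiliary negation-aware loss on a (video, original caption, partially negated caption) triplet could be combined with a standard retrieval loss. The cosine-similarity formulation, the margins, the hinge terms, and the weighting factor `alpha` are illustrative assumptions rather than the paper's exact definitions.

```python
# Sketch only: assumed formulation of a bidirectionally constrained auxiliary loss.
import torch
import torch.nn.functional as F


def negation_aware_loss(video_emb, pos_cap_emb, neg_cap_emb, margin: float = 0.2):
    """Auxiliary loss on the triplet (video, original caption, negated caption).

    The partially negated caption is treated as a *soft* negative: it should
    score lower than the original caption, yet not be pushed as far away as a
    fully irrelevant caption. Both constraints below are assumptions of this sketch.
    """
    # Cosine similarity between the video and each caption (batched along dim 0).
    s_pos = F.cosine_similarity(video_emb, pos_cap_emb)  # video vs. original caption
    s_neg = F.cosine_similarity(video_emb, neg_cap_emb)  # video vs. negated caption

    # Constraint 1: the original caption must outrank the negated one by a margin.
    upper = F.relu(margin + s_neg - s_pos)
    # Constraint 2: the negated caption must stay within a bounded distance of
    # the video, i.e., it is not treated as a hard negative.
    lower = F.relu(s_pos - s_neg - 2.0 * margin)
    return (upper + lower).mean()


def total_loss(retrieval_loss, video_emb, pos_cap_emb, neg_cap_emb, alpha: float = 0.5):
    """Weighted sum of the standard retrieval loss and the auxiliary loss."""
    return retrieval_loss + alpha * negation_aware_loss(video_emb, pos_cap_emb, neg_cap_emb)
```

In this sketch `video_emb`, `pos_cap_emb`, and `neg_cap_emb` are assumed to be batches of L2-comparable embeddings (e.g., CLIP video and text features) of shape `(batch, dim)`, and `retrieval_loss` is whatever standard contrastive or triplet retrieval loss the base model is trained with.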