Video Referring Expression Comprehension (REC) aims to localize in video frames the target object referred to by a natural language expression. Recently, Transformer-based methods have greatly pushed the performance limit. However, we argue that the current query design is suboptimal and suffers from two drawbacks: 1) slow training convergence; 2) a lack of fine-grained alignment. To alleviate this, we aim to couple the purely learnable queries with content information. Specifically, we set up a fixed number of learnable bounding boxes across the frame, and the aligned region features are employed to provide fruitful clues. Besides, we explicitly link certain phrases in the sentence to their semantically relevant visual areas. To this end, we introduce two new datasets (i.e., VID-Entity and VidSTG-Entity) by augmenting the VID-Sentence and VidSTG datasets, respectively, with the words explicitly referred to in the whole sentence. Benefiting from this, we conduct fine-grained cross-modal alignment at the region-phrase level, which yields more detailed feature representations. Incorporating these two designs, our proposed model (dubbed ContFormer) achieves state-of-the-art performance on widely used benchmark datasets. For example, on the VID-Entity dataset, ContFormer achieves an 8.75% absolute improvement in Accu.@0.6 over the previous SOTA.