Moment retrieval in videos is a challenging task that aims to retrieve the most relevant video moment in an untrimmed video given a sentence description. Previous methods tend to perform self-modal learning and cross-modal interaction in a coarse manner, neglecting fine-grained clues contained in video content, query context, and their alignment. To this end, we propose a novel Multi-Granularity Perception Network (MGPN) that perceives intra-modality and inter-modality information at a multi-granularity level. Specifically, we formulate moment retrieval as a multi-choice reading comprehension task and integrate human reading strategies into our framework. A coarse-grained feature encoder and a co-attention mechanism are utilized to obtain a preliminary perception of intra-modality and inter-modality information. Then, inspired by how humans address reading comprehension problems, a fine-grained feature encoder and a conditioned interaction module are introduced to enhance the initial perception. Moreover, to alleviate the heavy computational burden of some existing methods, we further design an efficient choice comparison module and reduce the hidden size with negligible quality loss. Extensive experiments on the Charades-STA, TACoS, and ActivityNet Captions datasets demonstrate that our solution outperforms existing state-of-the-art methods.
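To make the cross-modal coupling concrete, the sketch below shows one common way to realize a co-attention mechanism between clip-level video features and word-level query features, the step the abstract describes as obtaining a preliminary inter-modality perception. This is a minimal illustration under assumed shapes and layer choices (`CoAttention`, hidden size 256, bilinear affinity with bidirectional softmax), not the exact MGPN implementation.

```python
# Minimal co-attention sketch: each modality attends over the other via a
# shared affinity matrix. Module name, projections, and dimensions are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Bidirectional attention between video clips and query words."""
    def __init__(self, dim: int):
        super().__init__()
        self.video_proj = nn.Linear(dim, dim)
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, video: torch.Tensor, query: torch.Tensor):
        # video: (B, Tv, D) clip features; query: (B, Tq, D) word features
        v = self.video_proj(video)                       # (B, Tv, D)
        q = self.query_proj(query)                       # (B, Tq, D)
        sim = torch.bmm(v, q.transpose(1, 2))            # (B, Tv, Tq) affinity
        # Each clip aggregates query words; each word aggregates clips.
        v2q = torch.bmm(F.softmax(sim, dim=2), query)    # (B, Tv, D)
        q2v = torch.bmm(F.softmax(sim, dim=1).transpose(1, 2), video)  # (B, Tq, D)
        return v2q, q2v

# Usage: fuse a 64-clip video with a 12-word query at hidden size 256.
attn = CoAttention(dim=256)
video_feats = torch.randn(2, 64, 256)
query_feats = torch.randn(2, 12, 256)
video_enriched, query_enriched = attn(video_feats, query_feats)
```

Keeping the hidden size `D` modest, as the abstract notes, is what bounds the cost of the `(B, Tv, Tq)` affinity computation and the downstream choice comparison.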