As an important and challenging problem in vision-language tasks, referring expression comprehension (REC) aims to localize the target object specified by a given referring expression. Most state-of-the-art REC methods focus mainly on multi-modal fusion while overlooking the hierarchical information inherently contained in the visual and language encoders. Considering that REC requires both visual and textual hierarchical information for accurate target localization, and that encoders naturally extract features in a hierarchical fashion, we propose to effectively exploit the rich hierarchical information contained in different layers of the visual and language encoders. To this end, we design a Cross-level Multi-modal Fusion (CMF) framework, which gradually integrates multi-layer visual and textual features through intra- and inter-modal fusion. Experimental results on the RefCOCO, RefCOCO+, RefCOCOg, and ReferItGame datasets demonstrate that the proposed framework achieves significant performance improvements over state-of-the-art methods.
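To make the idea of cross-level fusion concrete, below is a minimal PyTorch-style sketch, not the paper's actual CMF implementation: it assumes hypothetical modules (`CrossLevelFusionBlock`, `CMFSketch`) in which each encoder level is first refined intra-modally, then fused inter-modally via cross-attention, and the fused state is carried upward across levels.

```python
import torch
import torch.nn as nn


class CrossLevelFusionBlock(nn.Module):
    """Fuses one encoder level: intra-modal self-attention per modality,
    then inter-modal cross-attention, then merging with the fused state
    carried from the previous level (hypothetical design, for illustration)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.intra_vis = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.intra_txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.inter_v2t = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, vis, txt, prev_fused=None):
        # Intra-modal: each modality attends over its own tokens.
        vis, _ = self.intra_vis(vis, vis, vis)
        txt, _ = self.intra_txt(txt, txt, txt)
        # Inter-modal: visual tokens query textual tokens.
        fused, _ = self.inter_v2t(vis, txt, txt)
        # Gradual integration with the fused features of the previous level.
        if prev_fused is not None:
            fused = self.merge(torch.cat([fused, prev_fused], dim=-1))
        return fused


class CMFSketch(nn.Module):
    """Applies one fusion block per encoder level, passing the fused state upward."""

    def __init__(self, dim: int, num_levels: int = 3):
        super().__init__()
        self.blocks = nn.ModuleList(CrossLevelFusionBlock(dim) for _ in range(num_levels))

    def forward(self, vis_levels, txt_levels):
        fused = None
        for block, vis, txt in zip(self.blocks, vis_levels, txt_levels):
            fused = block(vis, txt, fused)
        return fused  # in a full model, this would feed a box-prediction head


# Toy usage: three encoder levels, 256-dim features, batch of 2.
vis_levels = [torch.randn(2, 100, 256) for _ in range(3)]
txt_levels = [torch.randn(2, 20, 256) for _ in range(3)]
out = CMFSketch(dim=256)(vis_levels, txt_levels)
print(out.shape)  # torch.Size([2, 100, 256])
```

The level counts, feature dimensions, and the choice of cross-attention as the inter-modal operator are assumptions for illustration; the actual CMF framework may combine levels and modalities differently.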