Image-text matching plays a central role in bridging vision and language. Most existing approaches rely only on image-text instance pairs to learn their representations, thereby exploiting their matching relationships and making the corresponding alignments. Such approaches exploit only the superficial associations contained in the paired instance data, without considering any external commonsense knowledge, which may hinder their ability to reason about the higher-level relationships between image and text. In this paper, we propose a Consensus-aware Visual-Semantic Embedding (CVSE) model that incorporates consensus information, namely the commonsense knowledge shared between both modalities, into image-text matching. Specifically, the consensus information is exploited by computing the statistical co-occurrence correlations between semantic concepts in an image captioning corpus and deploying the constructed concept correlation graph to yield consensus-aware concept (CAC) representations. Afterwards, CVSE learns the associations and alignments between image and text based on the exploited consensus as well as the instance-level representations of both modalities. Extensive experiments on two public datasets verify that the exploited consensus makes significant contributions to constructing more meaningful visual-semantic embeddings and yields superior performance over state-of-the-art approaches on the bidirectional image and text retrieval task. The code for this paper is available at: https://github.com/BruceW91/CVSE.
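To make the consensus-exploitation step concrete, here is a minimal sketch of how a concept correlation graph could be built from caption co-occurrence statistics and used to produce CAC-style representations. All names (`build_correlation_graph`, `consensus_aware_representations`, the toy `concepts` and `captions`) are hypothetical illustrations and not the actual CVSE implementation; the linked repository is the authoritative reference. The sketch assumes edge weights are row-normalized co-occurrence counts (a conditional-probability-style correlation) and that propagation is a single normalized graph-convolution-like step.

```python
# A minimal sketch of the consensus-exploitation step, assuming hypothetical
# inputs: `captions` is a list of tokenized captions and `concepts` is the
# chosen concept vocabulary. Not the actual CVSE code.
import numpy as np

def build_correlation_graph(captions, concepts):
    """Count concept co-occurrences within captions and row-normalize the
    counts into conditional-probability-style edge weights."""
    idx = {c: i for i, c in enumerate(concepts)}
    n = len(concepts)
    cooc = np.zeros((n, n))
    for tokens in captions:
        present = {idx[t] for t in tokens if t in idx}
        for i in present:
            for j in present:
                if i != j:
                    cooc[i, j] += 1
    # Approximate P(concept_j | concept_i); guard against empty rows.
    row_sums = cooc.sum(axis=1, keepdims=True)
    return cooc / np.maximum(row_sums, 1)

def consensus_aware_representations(word_embeddings, graph):
    """One propagation step over the correlation graph: each concept's
    representation absorbs information from correlated concepts."""
    n = graph.shape[0]
    adj = graph + np.eye(n)               # add self-loops
    deg = adj.sum(axis=1, keepdims=True)
    return (adj / deg) @ word_embeddings  # normalized neighborhood averaging

# Usage with toy data:
concepts = ["dog", "ball", "grass", "frisbee"]
captions = [["dog", "ball", "grass"], ["dog", "frisbee", "grass"]]
graph = build_correlation_graph(captions, concepts)
cac = consensus_aware_representations(np.random.randn(4, 300), graph)
print(cac.shape)  # (4, 300)
```

In this reading, the graph encodes which concepts tend to appear together in captions (e.g., "dog" with "frisbee"), so a concept's propagated representation reflects consensus knowledge beyond any single image-text pair.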