The explosive growth of multimodal data creates a great demand for cross-modal applications, many of which rest on a strict assumption of prior relevance between modalities. Researchers have therefore studied how to define categories of cross-modal correlation and have constructed various classification schemes and predictive models. However, those schemes focus on the fine-grained relevant types of cross-modal correlation while ignoring a large amount of implicitly relevant data, which is often lumped into the irrelevant types. Worse still, none of the previous predictive models reflects the essence of cross-modal correlation, as given by its definition, at the modeling stage. In this paper, we present a comprehensive analysis of image-text correlation and define a new classification scheme based on implicit association and explicit alignment. To predict the type of image-text correlation, we propose the Association and Alignment Network (AnANet), designed according to our definition, which implicitly represents the global discrepancy and commonality between image and text and explicitly captures the cross-modal local relevance. Experimental results on our newly constructed image-text correlation dataset demonstrate the effectiveness of our model.
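The abstract only names the two branches of AnANet (implicit global association, explicit local alignment). As a rough illustration of how such a design could be wired up, the following is a minimal sketch in PyTorch; it assumes pretrained region/token encoders already produce fixed-size features, and all class, method, and parameter names here are hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn


class AnANetSketch(nn.Module):
    """Hypothetical sketch: a global association branch (discrepancy +
    commonality of pooled features) and a local alignment branch
    (cross-modal attention), fused for correlation-type classification."""

    def __init__(self, dim=512, num_classes=4):
        super().__init__()
        # Global association: fuse the difference (discrepancy) and the
        # element-wise product (commonality) of pooled image/text features.
        self.assoc_mlp = nn.Sequential(
            nn.Linear(dim * 2, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Local alignment: text tokens attend over image regions.
        self.align_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(dim * 2, num_classes)

    def forward(self, img_regions, txt_tokens):
        # img_regions: (B, R, dim) region features; txt_tokens: (B, T, dim) token features.
        img_g = img_regions.mean(dim=1)   # pooled global image feature
        txt_g = txt_tokens.mean(dim=1)    # pooled global text feature
        discrepancy = img_g - txt_g       # implicit global difference
        commonality = img_g * txt_g       # implicit global overlap
        assoc = self.assoc_mlp(torch.cat([discrepancy, commonality], dim=-1))

        # Explicit local relevance via cross-attention, then pooled.
        aligned, _ = self.align_attn(txt_tokens, img_regions, img_regions)
        align = aligned.mean(dim=1)

        return self.classifier(torch.cat([assoc, align], dim=-1))


if __name__ == "__main__":
    model = AnANetSketch()
    img = torch.randn(2, 36, 512)   # 36 image regions
    txt = torch.randn(2, 40, 512)   # 40 text tokens
    print(model(img, txt).shape)    # torch.Size([2, 4])
```

This is only one plausible reading of "implicit global association plus explicit local alignment"; the actual fusion, pooling, and number of correlation classes in AnANet may differ.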