Integrating multimodal knowledge into abstractive summarization is an emerging research area, and existing techniques largely follow the fusion-then-generation paradigm. Owing to the semantic gap between computer vision and natural language processing, current methods often treat the inputs of each modality as separate objects and rely on attention mechanisms to search for connections between them before fusing. Moreover, many frameworks lack explicit awareness of cross-modal matching, which degrades performance. To address these two drawbacks, we propose an Iterative Contrastive Alignment Framework (ICAF) that uses recurrent alignment and contrast to capture the coherence between images and text. Specifically, we design a recurrent alignment (RA) layer that gradually investigates fine-grained semantic relationships between image patches and text tokens. At each step of the encoding process, cross-modal contrastive losses are applied to directly optimize the embedding space. According to ROUGE, relevance scores, and human evaluation, our model outperforms state-of-the-art baselines on the MSMO dataset. We also conduct experiments on the applicability of the proposed framework and on hyperparameter settings.
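To make the two components concrete, the following is a minimal sketch, not the authors' implementation: one recurrent alignment step realized as symmetric cross-attention between patch and token embeddings, with a symmetric InfoNCE-style contrastive loss applied after each step. The class name RecurrentAlignmentLayer, the mean-pooling of sequences before the loss, the temperature value, and the number of iterations are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentAlignmentLayer(nn.Module):
    """One alignment step (hypothetical sketch): text tokens cross-attend
    over image patches and vice versa, then each modality is refined
    with a residual connection and layer normalization."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.text_to_image = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.image_to_text = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.text_norm = nn.LayerNorm(dim)
        self.image_norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor):
        t_attn, _ = self.text_to_image(text, image, image)   # tokens query patches
        i_attn, _ = self.image_to_text(image, text, text)    # patches query tokens
        return self.text_norm(text + t_attn), self.image_norm(image + i_attn)

def cross_modal_contrastive_loss(text_emb, image_emb, temperature=0.07):
    """Symmetric InfoNCE over pooled embeddings: matched text-image
    pairs in the batch are positives, all other pairings are negatives."""
    t = F.normalize(text_emb, dim=-1)
    v = F.normalize(image_emb, dim=-1)
    logits = t @ v.T / temperature                    # (B, B) similarity matrix
    targets = torch.arange(t.size(0), device=t.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Iterate alignment steps, applying the contrastive loss at every step
# so the embedding space is optimized directly during encoding.
B, T_len, P_len, D = 4, 32, 49, 256                   # assumed toy dimensions
text = torch.randn(B, T_len, D)                       # token embeddings
image = torch.randn(B, P_len, D)                      # patch embeddings
layers = nn.ModuleList(RecurrentAlignmentLayer(D) for _ in range(3))
loss = 0.0
for layer in layers:
    text, image = layer(text, image)
    loss = loss + cross_modal_contrastive_loss(text.mean(1), image.mean(1))
```

In this reading, the per-step loss is what distinguishes the framework from fusion-then-generation pipelines: alignment is supervised at every recurrence rather than only after a single fusion pass.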