Most current multi-modal summarization methods follow a cascaded approach, in which an off-the-shelf object detector is first used to extract visual features, and these features are then fused with language representations to generate the summary with an encoder-decoder model. This cascaded design cannot capture the semantic alignment between images and paragraphs, which is crucial for an accurate summary. In this paper, we propose ViL-Sum to jointly model paragraph-level \textbf{Vi}sion-\textbf{L}anguage Semantic Alignment and Multi-Modal \textbf{Sum}marization. The core of ViL-Sum is a joint multi-modal encoder with two well-designed auxiliary tasks: image reordering and image selection. The joint multi-modal encoder captures the interactions between modalities, where the reordering task guides the model to learn paragraph-level semantic alignment and the selection task guides the model to select summary-related images for the final summary. Experimental results show that our proposed ViL-Sum significantly outperforms current state-of-the-art methods. In further analysis, we find that the two well-designed tasks and the joint multi-modal encoder can effectively guide the model to learn reasonable paragraph-image and summary-image relations.
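To make the described architecture concrete, the following is a minimal PyTorch sketch of a joint multi-modal encoder with image-reordering and image-selection heads. All class and variable names, layer sizes, the 2048-dimensional detector features, and the maximum image count are illustrative assumptions rather than the paper's actual configuration; the summary decoder is omitted for brevity.

```python
import torch
import torch.nn as nn


class ViLSumSketch(nn.Module):
    """Minimal sketch: a joint multi-modal encoder with reordering
    and selection heads (hypothetical sizes, not the paper's)."""

    def __init__(self, d_model=768, n_heads=12, n_layers=6,
                 vocab_size=30522, max_images=16):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Project pre-extracted image features into the shared space.
        self.img_proj = nn.Linear(2048, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.joint_encoder = nn.TransformerEncoder(layer, n_layers)
        # Auxiliary heads: predict each image's original position (reordering)
        # and whether it belongs in the multi-modal summary (selection).
        self.reorder_head = nn.Linear(d_model, max_images)
        self.select_head = nn.Linear(d_model, 2)

    def forward(self, token_ids, image_feats):
        text = self.text_embed(token_ids)              # (B, T, d)
        imgs = self.img_proj(image_feats)              # (B, I, d)
        # Concatenate text and image tokens so self-attention can model
        # cross-modal interactions in a single joint encoder.
        joint = self.joint_encoder(torch.cat([text, imgs], dim=1))
        img_states = joint[:, token_ids.size(1):]      # image positions only
        return self.reorder_head(img_states), self.select_head(img_states)


if __name__ == "__main__":
    model = ViLSumSketch()
    tokens = torch.randint(0, 30522, (2, 64))          # dummy token ids
    images = torch.randn(2, 4, 2048)                   # dummy detector features
    reorder_logits, select_logits = model(tokens, images)
    print(reorder_logits.shape, select_logits.shape)   # (2, 4, 16) (2, 4, 2)
```

In this sketch, both auxiliary heads read the encoder states at the image positions, so their losses push the shared encoder toward the paragraph-level alignment and summary-image relations discussed above.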