Most current multi-modal summarization methods follow a cascaded approach: an off-the-shelf object detector first extracts visual features, and these features are then fused with language representations to generate the summary with an encoder-decoder model. This cascaded approach cannot capture the semantic alignments between images and paragraphs, which are crucial for a precise summary. In this paper, we propose ViL-Sum to jointly model paragraph-level \textbf{Vi}sion-\textbf{L}anguage Semantic Alignment and Multi-Modal \textbf{Sum}marization. The core of ViL-Sum is a joint multi-modal encoder with two well-designed auxiliary tasks: image reordering and image selection. The joint multi-modal encoder captures the interactions between modalities, where the reordering task guides the model to learn paragraph-level semantic alignment and the selection task guides the model to select summary-related images for the final summary. Experimental results show that our proposed ViL-Sum significantly outperforms current state-of-the-art methods. Further analysis shows that the two well-designed tasks and the joint multi-modal encoder can effectively guide the model to learn reasonable paragraph-image and summary-image relations.
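To make the described architecture concrete, the following is a minimal sketch of the joint multi-modal encoder with the two auxiliary heads, assuming a standard Transformer backbone over concatenated image and token embeddings. It is not the authors' implementation; all class and parameter names (`ViLSumSketch`, `n_images`, the head layers) are hypothetical, and dimensions are illustrative.

```python
# Sketch only: a joint multi-modal encoder trained with three objectives
# (summary generation, image reordering, image selection), per the abstract.
# Architecture details beyond the abstract are assumptions.
import torch
import torch.nn as nn

class ViLSumSketch(nn.Module):
    def __init__(self, d_model=768, n_images=8, vocab_size=30522):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=12, batch_first=True)
        # Joint multi-modal encoder: self-attention over image and text tokens
        # together, so cross-modal alignment is learned in one module.
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=12, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=6)
        self.lm_head = nn.Linear(d_model, vocab_size)     # summary token prediction
        self.reorder_head = nn.Linear(d_model, n_images)  # predict each image's original position
        self.select_head = nn.Linear(d_model, 1)          # score each image for inclusion

    def forward(self, img_emb, text_emb, summary_emb):
        # Concatenate modalities so attention spans paragraphs and images.
        hidden = self.encoder(torch.cat([img_emb, text_emb], dim=1))
        img_hidden = hidden[:, : img_emb.size(1)]         # hidden states of image tokens
        gen_logits = self.lm_head(self.decoder(summary_emb, hidden))
        reorder_logits = self.reorder_head(img_hidden)    # image reordering task
        select_scores = self.select_head(img_hidden).squeeze(-1)  # image selection task
        return gen_logits, reorder_logits, select_scores
```

Under this sketch, training would sum a cross-entropy loss for each head, so the reordering and selection signals shape the same encoder representations used for generation.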