Image-guided story ending generation (IgSEG) aims to generate a story ending based on given story plots and an ending image. Existing methods focus on cross-modal feature fusion but overlook reasoning over, and mining implicit information from, the story plots and ending image. To address this drawback, we propose a multimodal event transformer, an event-based reasoning framework for IgSEG. Specifically, we construct visual and semantic event graphs from the story plots and ending image, and leverage event-based reasoning to infer and mine implicit information within each single modality. Next, we connect the visual and semantic event graphs and apply cross-modal fusion to integrate features from the different modalities. In addition, we propose a multimodal injector that adaptively passes essential information to the decoder. Furthermore, we present an incoherence detection task to enhance our model's understanding of story-plot context and the robustness of its graph modeling. Experimental results show that our method achieves state-of-the-art performance on image-guided story ending generation.