Multimodal headline generation utilizes both video frames and transcripts to generate natural-language titles for videos. Since manually annotating grounded headlines for videos is labor-intensive and impractical, large-scale annotated data are lacking. Previous research on pre-trained language models and video-language models has achieved significant progress on related downstream tasks. However, none of these models can be directly applied to the multimodal headline architecture, which requires both a multimodal encoder and a sentence decoder. A major challenge in simply gluing a language model and a video-language model together is modality balance, i.e., combining their complementary visual and linguistic abilities. In this paper, we propose a novel approach that grafts the video encoder of the pre-trained video-language model onto the generative pre-trained language model. We also present a consensus fusion mechanism that integrates the different components via inter-/intra-modality relations. Experiments show that the grafted model achieves strong results on a brand-new dataset collected from real-world applications.
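The abstract only outlines the architecture at a high level. Below is a minimal, hypothetical PyTorch sketch of how such a grafted model could be wired: a video encoder taken from a pre-trained video-language model feeds a generative pre-trained language model, with a cross-attention step standing in for the consensus fusion mechanism. The class name, dimensions, stand-in modules, and the use of pre-extracted frame features are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch (not the paper's implementation): graft a video encoder from a
# pre-trained video-language model onto a generative pre-trained language model,
# fusing the two streams before decoding the headline. All module choices,
# names, and dimensions below are illustrative assumptions.

class GraftedHeadlineModel(nn.Module):
    def __init__(self, video_encoder: nn.Module, text_encoder: nn.Module,
                 decoder: nn.Module, video_dim: int = 768, model_dim: int = 768):
        super().__init__()
        self.video_encoder = video_encoder   # grafted from the video-language model
        self.text_encoder = text_encoder     # transcript encoder of the pre-trained LM
        self.decoder = decoder               # generative LM decoder producing the headline
        self.project = nn.Linear(video_dim, model_dim)  # align video features with the LM space
        # cross-modal attention standing in for the consensus fusion mechanism
        self.fusion = nn.MultiheadAttention(model_dim, num_heads=8, batch_first=True)

    def forward(self, frame_feats, transcript_emb, headline_emb):
        video_feats = self.project(self.video_encoder(frame_feats))  # (B, Tv, D)
        text_feats = self.text_encoder(transcript_emb)               # (B, Tt, D)
        # inter-modality relation: transcript tokens attend to video frames
        fused, _ = self.fusion(text_feats, video_feats, video_feats)
        # decode the headline conditioned on the fused multimodal memory
        return self.decoder(headline_emb, fused)


# Example wiring with stand-in Transformer blocks; in practice the encoders and
# decoder would come from the pre-trained video-language and language models.
enc_layer = nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True)
dec_layer = nn.TransformerDecoderLayer(d_model=768, nhead=8, batch_first=True)
model = GraftedHeadlineModel(
    video_encoder=nn.TransformerEncoder(enc_layer, num_layers=2),
    text_encoder=nn.TransformerEncoder(enc_layer, num_layers=2),
    decoder=nn.TransformerDecoder(dec_layer, num_layers=2),
)
frame_feats = torch.randn(2, 16, 768)     # 16 pre-extracted frame features per video
transcript_emb = torch.randn(2, 64, 768)  # embedded transcript tokens
headline_emb = torch.randn(2, 12, 768)    # embedded (shifted) headline tokens
out = model(frame_feats, transcript_emb, headline_emb)  # (2, 12, 768)
```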