Cross-modal alignment is essential for vision-language pre-training (VLP) models to learn the correct correspondences across modalities. To this end, inspired by the success of masked language modeling (MLM) in NLP pre-training, numerous masked modeling tasks have been proposed for VLP to further promote cross-modal interaction. The core idea of previous masked modeling tasks is to reconstruct masked tokens from the visible context, thereby learning local-to-local alignment. However, most of them pay little attention to the global semantic features generated for the masked data, resulting in limited cross-modal alignment of global representations. We therefore propose a novel Semantic Completion Learning (SCL) task, complementary to existing masked modeling tasks, to facilitate global-to-local alignment. Specifically, SCL completes the missing semantics of masked data by capturing the corresponding information from the other modality, promoting the learning of more representative global features, which strongly affect downstream performance. Moreover, we present a flexible vision encoder that enables our model to handle image-text and video-text multimodal tasks simultaneously. Experimental results show that our method achieves state-of-the-art performance on various vision-language benchmarks, including visual question answering, image-text retrieval, and video-text retrieval.
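The global-to-local alignment idea can be illustrated with a minimal sketch: the global feature recovered from a masked input is pulled toward the global feature of the other (unmasked) modality. This is an illustrative assumption, not the paper's actual objective; the function name `semantic_completion_loss` and the cosine-distance formulation are hypothetical stand-ins.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize each row vector to unit length."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def semantic_completion_loss(masked_global, other_modal_global):
    """Hypothetical sketch: cosine-distance loss pulling the global
    representation recovered from masked data toward the global
    representation of the corresponding other modality.
    Both inputs: (batch, dim) arrays."""
    a = l2_normalize(masked_global)
    b = l2_normalize(other_modal_global)
    # 1 - cosine similarity, averaged over the batch
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

# Identical features incur zero loss; orthogonal features incur loss 1.
aligned = semantic_completion_loss(np.array([[1.0, 0.0]]),
                                   np.array([[1.0, 0.0]]))
orthogonal = semantic_completion_loss(np.array([[1.0, 0.0]]),
                                      np.array([[0.0, 1.0]]))
```

In an actual VLP setup such a term would be added alongside the existing masked-reconstruction losses, so the local token-level objectives and the global alignment objective are trained jointly.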