We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. Specifically, we introduce the Mixture-of-Modality-Experts (MoME) Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer. Because of the modeling flexibility of MoME, pretrained VLMo can be fine-tuned as a fusion encoder for vision-language classification tasks, or used as a dual encoder for efficient image-text retrieval. Moreover, we propose a stagewise pretraining strategy, which effectively leverages large-scale image-only and text-only data in addition to image-text pairs. Experimental results show that VLMo achieves state-of-the-art results on various vision-language tasks, including VQA, NLVR2, and image-text retrieval. The code and pretrained models are available at https://aka.ms/vlmo.
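As a rough illustration of the MoME block described above, the following PyTorch-style sketch pairs a shared self-attention layer with a pool of modality-specific feed-forward experts. The class name MoMEBlock, the expert keys ("vision", "language", "vl"), and the hyperparameters are illustrative assumptions, not the released implementation.

```python
# Minimal sketch of a Mixture-of-Modality-Experts (MoME) Transformer block.
# Names and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class MoMEBlock(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 12, mlp_ratio: int = 4):
        super().__init__()
        # Self-attention is shared across modalities.
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Each modality routes to its own feed-forward expert.
        self.norm2 = nn.LayerNorm(dim)
        self.experts = nn.ModuleDict({
            name: nn.Sequential(
                nn.Linear(dim, dim * mlp_ratio),
                nn.GELU(),
                nn.Linear(dim * mlp_ratio, dim),
            )
            for name in ("vision", "language", "vl")
        })

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        # Shared attention over all tokens (image patches, text tokens, or both).
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        # Route to the modality-specific expert ("vision", "language", or "vl").
        x = x + self.experts[modality](self.norm2(x))
        return x


# Usage: image-only tokens pass through the vision expert, text-only tokens
# through the language expert, and fused image-text sequences through the
# vision-language expert.
block = MoMEBlock()
image_tokens = torch.randn(2, 197, 768)  # e.g. ViT patch tokens plus [CLS]
out = block(image_tokens, modality="vision")
```

Switching the expert per input type while reusing the same attention weights is what lets a single pretrained network serve both as a fusion encoder and as a dual encoder at fine-tuning time.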