Vision-and-language (VL) pre-training has proven to be highly effective on various VL downstream tasks. While recent work has shown that fully transformer-based VL models can be more efficient than previous region-feature-based methods, their performance on downstream tasks often degrades significantly. In this paper, we present METER, a Multimodal End-to-end TransformER framework, through which we investigate how to design and pre-train a fully transformer-based VL model in an end-to-end manner. Specifically, we dissect the model designs along multiple dimensions: vision encoders (e.g., CLIP-ViT, Swin transformer), text encoders (e.g., RoBERTa, DeBERTa), multimodal fusion modules (e.g., merged attention vs. co-attention), architectural design (e.g., encoder-only vs. encoder-decoder), and pre-training objectives (e.g., masked image modeling). We conduct comprehensive experiments and provide insights on how to train a performant VL transformer while maintaining fast inference speed. Notably, our best model achieves an accuracy of 77.64% on the VQAv2 test-std set using only 4M images for pre-training, surpassing the state-of-the-art region-feature-based model by 1.04%, and outperforming the previous best fully transformer-based model by 1.6%. Code and models are released at https://github.com/zdou0830/METER.
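To make the fusion-module comparison concrete, below is a minimal PyTorch sketch of the two designs named above: merged attention, where text and image tokens are concatenated and processed by a single self-attention block, and co-attention, where each modality keeps its own stream with self-attention plus cross-attention to the other modality. Class names, layer counts, and dimensions are illustrative assumptions, not METER's actual implementation.

```python
import torch
import torch.nn as nn

class MergedAttentionBlock(nn.Module):
    """Merged attention: concatenate text and image tokens, then apply joint self-attention."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        x = torch.cat([text, image], dim=1)      # (B, L_text + L_image, D)
        out, _ = self.attn(x, x, x)              # one self-attention over both modalities
        return self.norm(x + out)

class CoAttentionBlock(nn.Module):
    """Co-attention: per-modality self-attention followed by cross-attention to the other modality."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        h, _ = self.self_attn(x, x, x)           # intra-modal attention
        x = self.norm1(x + h)
        h, _ = self.cross_attn(x, other, other)  # attend to the other modality
        return self.norm2(x + h)

# Toy usage: 32 text tokens and 196 image patches, hidden size 768.
text = torch.randn(2, 32, 768)
image = torch.randn(2, 196, 768)
merged = MergedAttentionBlock(768)(text, image)   # (2, 228, 768)
text_out = CoAttentionBlock(768)(text, image)     # (2, 32, 768)
image_out = CoAttentionBlock(768)(image, text)    # (2, 196, 768)
```

A full fusion module would stack several such blocks (with feed-forward sublayers) on top of the frozen-or-finetuned vision and text encoders; the sketch only shows the attention pattern that distinguishes the two designs.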