Large-scale pretraining is fast becoming the norm in Vision-Language (VL) modeling. However, prevailing VL approaches are limited by the requirement for labeled data and the use of complex multi-step pretraining objectives. We present MAGMA, a simple method for augmenting generative language models with additional modalities using adapter-based finetuning. Building on Frozen, we train a series of VL models that autoregressively generate text from arbitrary combinations of visual and textual input. The pretraining is entirely end-to-end with a single language modeling objective, simplifying optimization compared to previous approaches. Importantly, the language model weights remain unchanged during training, allowing for transfer of encyclopedic knowledge and in-context learning abilities from language pretraining. MAGMA outperforms Frozen on open-ended generative tasks, achieving state-of-the-art results on the OKVQA benchmark and competitive results on a range of other popular VL benchmarks, while pretraining on only 0.2% of the number of samples used to train SimVLM.
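As a concrete illustration of the setup the abstract describes (a frozen autoregressive language model that receives visual prefix embeddings followed by text embeddings, with only small adapter modules and the image-to-prefix projection trained under a single language modeling loss), the following is a minimal PyTorch sketch. It is not the authors' implementation: the names `Adapter`, `VisualPrefix`, and `lm_loss`, and the assumption of pooled image features, are illustrative choices for this sketch only.

```python
# Minimal sketch of a MAGMA-style adapter setup (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Bottleneck adapter inserted into a frozen transformer block.

    Only these small layers (plus the visual prefix projection) receive
    gradients; the language model weights stay fixed.
    """

    def __init__(self, dim: int, downsample: int = 4):
        super().__init__()
        hidden = dim // downsample
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck around the frozen block's output.
        return x + self.up(F.relu(self.down(x)))


class VisualPrefix(nn.Module):
    """Maps pooled image features to a sequence of LM-compatible prefix embeddings."""

    def __init__(self, feature_dim: int, lm_dim: int, prefix_len: int):
        super().__init__()
        self.prefix_len = prefix_len
        self.proj = nn.Linear(feature_dim, lm_dim * prefix_len)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # image_features: (batch, feature_dim) -> (batch, prefix_len, lm_dim)
        b = image_features.size(0)
        return self.proj(image_features).view(b, self.prefix_len, -1)


def lm_loss(logits: torch.Tensor, text_ids: torch.Tensor, prefix_len: int) -> torch.Tensor:
    """Single autoregressive language modeling objective.

    The LM input is [prefix embeddings, text embeddings]; logits at position i
    predict token i + 1, so the positions prefix_len-1 .. prefix_len+T-2
    predict the T text tokens. No loss is applied to the visual prefix itself.
    """
    t = text_ids.size(1)
    text_logits = logits[:, prefix_len - 1 : prefix_len + t - 1, :]
    return F.cross_entropy(
        text_logits.reshape(-1, text_logits.size(-1)), text_ids.reshape(-1)
    )


def freeze_language_model(language_model: nn.Module) -> None:
    """Keep the pretrained LM weights unchanged; train only adapters and the prefix."""
    for p in language_model.parameters():
        p.requires_grad = False
```

In this sketch the end-to-end training loop would simply concatenate the `VisualPrefix` output with the text token embeddings, run the frozen (adapter-augmented) language model, and backpropagate `lm_loss` through the trainable modules only.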