The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
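To make the bridging idea concrete, here is a minimal sketch (not the official BLIP-2 code) of a Q-Former-style bridge: a small set of learnable query tokens cross-attends to features from a frozen image encoder, and the resulting query outputs are projected into the frozen language model's embedding space as soft visual prompts. The layer sizes, the single cross-attention block, and the stub encoder are illustrative assumptions only; the real model uses a full BERT-style Q-Former with a ViT-g encoder and an OPT/FlanT5 LLM.

```python
# Minimal sketch of a Q-Former-style bridge (illustrative assumptions only,
# not the official BLIP-2 implementation).
import torch
import torch.nn as nn

class QFormerBridge(nn.Module):
    def __init__(self, num_queries=32, vision_dim=1024, hidden_dim=768, llm_dim=2048):
        super().__init__()
        # Learnable query tokens -- the only new parameters besides the bridge layers.
        self.queries = nn.Parameter(torch.randn(1, num_queries, hidden_dim) * 0.02)
        self.vision_proj = nn.Linear(vision_dim, hidden_dim)
        # Cross-attention from the queries to (projected) frozen image features.
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads=12, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim * 4), nn.GELU(),
            nn.Linear(hidden_dim * 4, hidden_dim),
        )
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.norm2 = nn.LayerNorm(hidden_dim)
        # Projection into the frozen LLM's input embedding space.
        self.llm_proj = nn.Linear(hidden_dim, llm_dim)

    def forward(self, image_feats):
        # image_feats: (batch, num_patches, vision_dim) from a frozen image encoder.
        kv = self.vision_proj(image_feats)
        q = self.queries.expand(image_feats.size(0), -1, -1)
        attn_out, _ = self.cross_attn(q, kv, kv)
        x = self.norm1(q + attn_out)
        x = self.norm2(x + self.ffn(x))
        # (batch, num_queries, llm_dim): soft prompts prepended to the frozen LLM's text embeddings.
        return self.llm_proj(x)

# Usage with stand-in frozen components; these stubs only show the data flow.
frozen_image_encoder = nn.Linear(3 * 16 * 16, 1024)   # stand-in for a frozen ViT patch encoder
for p in frozen_image_encoder.parameters():
    p.requires_grad = False

bridge = QFormerBridge()                               # the only trainable part in this sketch
patches = torch.randn(2, 257, 3 * 16 * 16)             # (batch, patches, flattened pixels)
with torch.no_grad():
    image_feats = frozen_image_encoder(patches)
soft_prompts = bridge(image_feats)
print(soft_prompts.shape)                              # torch.Size([2, 32, 2048])
```

In this sketch the image encoder and language model stay frozen; only the bridge (queries, cross-attention, and projections) would be trained, which is what keeps the trainable-parameter count small relative to end-to-end models.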