The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model's emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions.
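To make the bridging idea concrete, the sketch below is a rough, hypothetical illustration (not the authors' implementation; all module names, dimensions, and the single cross-attention block are assumptions) of how a small set of learnable query tokens could attend to features from a frozen image encoder and be projected into the embedding space of a frozen language model.

```python
import torch
import torch.nn as nn

class QFormerSketch(nn.Module):
    """Hypothetical minimal sketch of the BLIP-2 bridging idea:
    learnable queries cross-attend to frozen image features, and the
    query outputs are projected into a frozen LLM's input space."""

    def __init__(self, num_queries=32, vision_dim=1024, qformer_dim=768, llm_dim=2560):
        super().__init__()
        # Learnable query embeddings: the lightweight trainable "bridge".
        self.queries = nn.Parameter(torch.randn(1, num_queries, qformer_dim) * 0.02)
        # Project frozen image features into the Q-Former width.
        self.vision_proj = nn.Linear(vision_dim, qformer_dim)
        # Cross-attention: queries (Q) attend to image features (K, V).
        self.cross_attn = nn.MultiheadAttention(qformer_dim, num_heads=12, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(qformer_dim, 4 * qformer_dim),
            nn.GELU(),
            nn.Linear(4 * qformer_dim, qformer_dim),
        )
        # Map query outputs into the frozen LLM's embedding dimension.
        self.llm_proj = nn.Linear(qformer_dim, llm_dim)

    def forward(self, image_feats):
        # image_feats: (batch, num_patches, vision_dim) from a frozen image encoder.
        kv = self.vision_proj(image_feats)
        q = self.queries.expand(image_feats.size(0), -1, -1)
        attn_out, _ = self.cross_attn(q, kv, kv)
        out = attn_out + self.ffn(attn_out)
        # Soft visual prompts to prepend to the frozen LLM's text embeddings.
        return self.llm_proj(out)

if __name__ == "__main__":
    bridge = QFormerSketch()
    fake_image_feats = torch.randn(2, 257, 1024)  # e.g. ViT-L-style patch features
    visual_prompts = bridge(fake_image_feats)
    print(visual_prompts.shape)  # torch.Size([2, 32, 2560])
```

In this sketch only the bridge module is trainable; the image encoder and the language model would be loaded separately with their parameters frozen, which is what keeps the number of trainable parameters small relative to end-to-end approaches.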