Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.
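To make the bridging idea in point (i) concrete, below is a minimal PyTorch-style sketch of a gated cross-attention block of the kind Flamingo inserts between the layers of a frozen language model so that text tokens can attend to visual features. The class name, dimensions, and hyperparameters are illustrative assumptions, not the paper's exact implementation; the key property it demonstrates is that the tanh gates are initialised to zero, so at the start of training the block is the identity and the frozen language model's behaviour is preserved.

```python
import torch
import torch.nn as nn

class GatedCrossAttentionBlock(nn.Module):
    """Illustrative sketch: lets frozen-LM text tokens attend to visual tokens.

    Gates are initialised to zero (tanh(0) = 0), so the block starts as the
    identity and only gradually mixes in visual information during training.
    """

    def __init__(self, dim: int, num_heads: int = 8, ffw_mult: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffw = nn.Sequential(
            nn.Linear(dim, ffw_mult * dim), nn.GELU(), nn.Linear(ffw_mult * dim, dim)
        )
        # Learnable scalar gates, initialised to 0 -> identity at init.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ffw_gate = nn.Parameter(torch.zeros(1))

    def forward(self, text_tokens, visual_tokens):
        # text_tokens:   (batch, text_len, dim)  hidden states from the frozen LM
        # visual_tokens: (batch, vis_len, dim)   e.g. resampled vision features
        attn_out, _ = self.cross_attn(text_tokens, visual_tokens, visual_tokens)
        x = text_tokens + torch.tanh(self.attn_gate) * attn_out
        x = x + torch.tanh(self.ffw_gate) * self.ffw(x)
        return x


# Hypothetical usage with random tensors standing in for real features.
block = GatedCrossAttentionBlock(dim=512)
text = torch.randn(2, 16, 512)    # 16 text tokens per sequence
visual = torch.randn(2, 64, 512)  # 64 visual tokens per sequence
out = block(text, visual)         # same shape as `text`
```

Because only these inserted blocks (and the visual resampling components) are trained while the pretrained vision and language backbones stay frozen, the approach preserves the backbones' capabilities while adding the cross-modal conditioning the abstract describes.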