We introduce M-VADER: a diffusion model (DM) for image generation where the output can be specified using arbitrary combinations of images and text. We show how M-VADER enables the generation of images specified by combinations of image and text, and by combinations of multiple images. Previously, a number of successful DM image generation algorithms were introduced that make it possible to specify the output image using a text prompt. Inspired by the success of those models, and guided by the notion that language developed precisely to describe the elements of a visual context that humans find most important, we introduce an embedding model closely related to a vision-language model. Specifically, we introduce the embedding model S-MAGMA: a 13-billion-parameter multimodal decoder that combines components of the autoregressive vision-language model MAGMA with biases finetuned for semantic search.
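To make the conditioning pipeline concrete, the following is a minimal, hypothetical sketch of the idea described above: image and text tokens are processed jointly by a decoder and pooled into a single embedding that can condition a diffusion model. All module and parameter names (`MultimodalEmbedder`, patch sizes, layer counts) are placeholder assumptions for illustration and do not reflect the actual S-MAGMA architecture or its released code.

```python
# Hypothetical sketch only: a toy stand-in for an S-MAGMA-style embedder that
# maps interleaved image and text tokens to one pooled conditioning vector.
import torch
import torch.nn as nn

class MultimodalEmbedder(nn.Module):
    """Pools a sequence of image-prefix and text tokens into a single
    embedding, in the spirit of a semantic-search readout."""
    def __init__(self, d_model: int = 512, vocab_size: int = 1000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)
        # Tiny visual projection standing in for a MAGMA-style image prefix
        # (here: flattened 16x16 RGB patches; an assumed toy input format).
        self.img_proj = nn.Linear(3 * 16 * 16, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, img_patches: torch.Tensor, text_ids: torch.Tensor):
        img_tok = self.img_proj(img_patches)        # (B, P, d)
        txt_tok = self.text_embed(text_ids)         # (B, T, d)
        seq = torch.cat([img_tok, txt_tok], dim=1)  # (B, P+T, d): joint prompt
        # Mean-pool the decoder states into one conditioning embedding.
        return self.decoder(seq).mean(dim=1)        # (B, d)

# Usage: one pooled embedding per (image, text) prompt, ready to feed into a
# diffusion model's conditioning pathway (e.g. via cross-attention).
embedder = MultimodalEmbedder()
img = torch.randn(2, 4, 3 * 16 * 16)      # 2 prompts, 4 image patches each
txt = torch.randint(0, 1000, (2, 7))      # 7 text tokens per prompt
cond = embedder(img, txt)
print(cond.shape)                          # torch.Size([2, 512])
```

Because the prompt is just a token sequence, the same pooling applies unchanged whether the prompt mixes one image with text or combines several images.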