The goal of this paper is to augment a pre-trained text-to-image diffusion model with the ability of open-vocabulary object grounding, i.e., simultaneously generating images and segmentation masks for the visual entities described in the text prompt. We make the following contributions: (i) we insert a grounding module into the existing diffusion model, which can be trained to align the visual and textual embedding spaces of the diffusion model using only a small number of object categories; (ii) we propose an automatic pipeline for constructing a dataset of {image, segmentation mask, text prompt} triplets to train the proposed grounding module; (iii) we evaluate open-vocabulary grounding on images generated by the text-to-image diffusion model, and show that the module can accurately segment objects of categories beyond those seen at training time; (iv) we adopt the guided diffusion model to build a synthetic semantic segmentation dataset, and show that a standard segmentation model trained on this dataset achieves competitive performance on the zero-shot semantic segmentation (ZS3) benchmark, which opens up new opportunities for adopting the powerful diffusion model for discriminative tasks.
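To make contribution (i) concrete, the sketch below illustrates one plausible form such a grounding module could take: cross-attention that uses text-token embeddings as queries over the diffusion model's spatial features to produce a soft mask per grounded entity. This is a minimal illustration under assumed interfaces, not the paper's exact architecture; all names (GroundingModule, d_model, the feature dimensions) are hypothetical.

```python
# A minimal sketch (not the authors' exact design) of a grounding module
# that aligns the diffusion model's visual features with text embeddings
# via cross-attention and predicts one soft segmentation mask per token.
import torch
import torch.nn as nn

class GroundingModule(nn.Module):
    def __init__(self, visual_dim: int, text_dim: int, d_model: int = 256):
        super().__init__()
        self.vis_proj = nn.Linear(visual_dim, d_model)  # project U-Net features
        self.txt_proj = nn.Linear(text_dim, d_model)    # project text embeddings
        self.scale = d_model ** -0.5

    def forward(self, vis_feats: torch.Tensor, txt_embeds: torch.Tensor):
        """
        vis_feats:  (B, H*W, visual_dim) spatial features from the diffusion model
        txt_embeds: (B, T, text_dim)     embeddings of the grounded entity tokens
        returns:    (B, T, H*W)          one soft segmentation mask per token
        """
        q = self.txt_proj(txt_embeds)   # queries come from the text side
        k = self.vis_proj(vis_feats)    # keys come from the visual side
        attn = torch.einsum("btd,bnd->btn", q, k) * self.scale
        return attn.sigmoid()           # per-entity soft masks over locations

# Usage: masks for 2 entity tokens over a 64x64 feature map
module = GroundingModule(visual_dim=1280, text_dim=768)
vis = torch.randn(1, 64 * 64, 1280)
masks = module(vis, torch.randn(1, 2, 768))  # shape: (1, 2, 4096)
```

Because only the small projection layers are trained while the diffusion backbone stays frozen, a module of this kind can plausibly be fit with few annotated categories yet queried with arbitrary text at inference, which is what enables the open-vocabulary evaluation in contribution (iii).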