Denoising diffusion models have shown remarkable performance in generating diverse, high-quality images from text. Numerous techniques have been proposed on top of, or in alignment with, models like Stable Diffusion and Imagen that generate images directly from text. A lesser-explored approach is DALLE-2's two-step process, comprising a Diffusion Prior that generates a CLIP image embedding from text and a Diffusion Decoder that generates an image from that embedding. We explore the capabilities of the Diffusion Prior and the advantages of an intermediate CLIP representation. We observe that the Diffusion Prior can be used in a memory- and compute-efficient way to constrain generation to a specific domain without altering the larger Diffusion Decoder. Moreover, we show that the Diffusion Prior can be trained with additional conditioning information, such as a color histogram, to further control generation. We show quantitatively and qualitatively that the proposed approaches perform better than prompt engineering for domain-specific generation and than existing baselines for color-conditioned generation. We believe that our observations and results will instigate further research into the Diffusion Prior and uncover more of its capabilities.