Denoising diffusion models have shown remarkable capabilities in generating realistic, high-quality, and diverse images. However, the extent of controllability and editability with diffusion models remains underexplored relative to GANs. Inspired by techniques for image manipulation based on the latent space of GAN models, we propose to train a diffusion model conditioned on two latent codes: a spatial content mask and a flattened style embedding. We rely on the inductive bias of the progressive denoising process of diffusion models to encode pose/layout information in the spatial structure mask and semantic/style information in the style code. We extend the sampling technique from composable diffusion models to allow for some dependence between the conditional inputs. This significantly improves the quality of the generations while also providing control over the amount of guidance from each latent code separately as well as from their joint distribution. To further enhance controllability, we vary the level of guidance for the structure and style latents based on the denoising timestep. We observe more controllability compared to existing methods and show that, without explicit training objectives, diffusion models can be leveraged for effective image manipulation, reference-based image translation, and style transfer.
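To make the composed sampling rule concrete, below is a minimal sketch of one plausible reading of the abstract: a classifier-free-guidance-style combination of noise predictions under the unconditional, structure-only, style-only, and joint conditionings, with timestep-dependent weights for the two latents. The decomposition and the names (composed_eps, w_joint, w_struct_fn, w_style_fn) are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def composed_eps(model, x_t, t, c_struct, c_style,
                 w_joint=1.0,
                 w_struct_fn=lambda t: 1.0,
                 w_style_fn=lambda t: 1.0):
    """Hypothetical composed guidance rule (sketch, not the paper's code).

    `model(x_t, t, struct, style)` is assumed to predict the noise eps,
    with None standing in for a dropped (null) condition, as in
    classifier-free guidance training with conditioning drop-out.
    """
    eps_uncond = model(x_t, t, None, None)          # fully unconditional
    eps_joint  = model(x_t, t, c_struct, c_style)   # both latents jointly
    eps_struct = model(x_t, t, c_struct, None)      # structure latent only
    eps_style  = model(x_t, t, None, c_style)       # style latent only

    # Timestep-dependent weights: one natural schedule (an assumption here)
    # emphasizes structure guidance at high-noise (early) steps and style
    # guidance at low-noise (late) steps, matching the abstract's intuition
    # that layout is decided early in the denoising process.
    w_s, w_z = w_struct_fn(t), w_style_fn(t)

    return (eps_uncond
            + w_joint * (eps_joint - eps_uncond)
            + w_s * (eps_struct - eps_uncond)
            + w_z * (eps_style - eps_uncond))

# Toy usage with a stand-in model that ignores its conditions:
dummy = lambda x, t, s, z: torch.zeros_like(x)
x_t = torch.randn(1, 3, 64, 64)
eps_hat = composed_eps(dummy, x_t, t=500, c_struct=None, c_style=None)
```

Note that setting w_struct_fn and w_style_fn to zero recovers standard joint classifier-free guidance, while the separate terms expose the per-latent control the abstract describes.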