Semantic image synthesis is a process for generating photorealistic images from a single semantic mask. To enrich the diversity of multimodal image synthesis, previous methods have controlled the global appearance of an output image by learning a single latent space. However, a single latent code is often insufficient for capturing various object styles because object appearance depends on multiple factors. To handle the individual factors that determine object styles, we propose a class- and layer-wise extension to the variational autoencoder (VAE) framework that allows flexible control over each object class at the local to global levels by learning multiple latent spaces. Furthermore, through extensive experiments with real and synthetic datasets in three different domains, we demonstrate that our method generates images that are both plausible and more diverse than those of state-of-the-art methods. We also show that our method enables a wide range of applications in image synthesis and editing tasks.
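The core idea of class-wise latent control can be illustrated with a minimal sketch. The snippet below is not the paper's implementation; it only shows, under simplifying assumptions, how assigning one latent code per semantic class lets each object region be restyled independently (the function names, the toy mask, and the class labels are all hypothetical):

```python
import random

def sample_class_latents(class_ids, dim=4, rng=None):
    """Draw one latent code per semantic class from N(0, 1).

    In a class-wise VAE these codes would come from per-class
    encoders; here we simply sample the prior (illustrative only).
    """
    rng = rng or random.Random(0)
    return {c: [rng.gauss(0.0, 1.0) for _ in range(dim)] for c in class_ids}

def broadcast_latents(mask, latents):
    """Build a per-pixel style map: each pixel receives the latent
    code of its class, so resampling one class's code changes only
    that object's region while the rest of the image is untouched."""
    return [[latents[c] for c in row] for row in mask]

# Toy 2x3 semantic mask: 0 = sky, 1 = road (labels are hypothetical).
mask = [[0, 0, 1],
        [1, 1, 1]]
latents = sample_class_latents({0, 1}, dim=4)
style_map = broadcast_latents(mask, latents)
```

A layer-wise extension would nest this further, keeping one such code per class at each level of a local-to-global hierarchy.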