Conditional image synthesis from layout has recently attracted much interest. Previous approaches condition the generator on object locations as well as class labels, but lack fine-grained control over the diverse appearance aspects of individual objects. Gaining control over the image generation process is fundamental to building practical applications with a user-friendly interface. In this paper, we propose a method for attribute-controlled image synthesis from layout which allows specifying the appearance of individual objects without affecting the rest of the image. We extend a state-of-the-art approach for layout-to-image generation to additionally condition individual objects on attributes. We create and experiment on a synthetic dataset, as well as the challenging Visual Genome dataset. Our qualitative and quantitative results show that our method can successfully control the fine-grained details of individual objects when modelling complex scenes with multiple objects. Source code, dataset and pre-trained models are publicly available (https://github.com/stanifrolov/AttrLostGAN).
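To make the conditioning interface concrete, a layout with per-object attributes might be represented as below. This is a minimal illustrative sketch, not the paper's actual data format: the field names (`bbox`, `label`, `attributes`) and normalized-coordinate convention are assumptions for illustration only.

```python
# Hypothetical sketch of an attribute-annotated layout specification.
# Each object carries a bounding box (x, y, w, h) in normalized [0, 1]
# coordinates, a class label, and a list of appearance attributes.

def validate_layout(layout):
    """Check that every object has a normalized bbox, a label, and attributes."""
    for obj in layout:
        x, y, w, h = obj["bbox"]
        assert 0.0 <= x <= 1.0 and 0.0 <= y <= 1.0, "bbox origin out of range"
        assert 0.0 < w <= 1.0 and 0.0 < h <= 1.0, "bbox size out of range"
        assert isinstance(obj["label"], str)
        assert isinstance(obj["attributes"], list)
    return True

layout = [
    {"bbox": (0.10, 0.55, 0.30, 0.35), "label": "car",  "attributes": ["red", "shiny"]},
    {"bbox": (0.50, 0.20, 0.40, 0.60), "label": "tree", "attributes": ["green"]},
]

# Editing one object's attributes leaves the rest of the layout untouched,
# mirroring the paper's goal of per-object appearance control.
layout[0]["attributes"] = ["blue", "matte"]
```

The key point this sketch captures is locality: attributes are attached to individual objects rather than to the whole image, so changing one object's appearance description does not perturb the specification of any other object.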