Generation of images from scene graphs is a promising direction towards explicit scene generation and manipulation. However, images generated from scene graphs often lack quality, which stems in part from the high difficulty and diversity of the data. We propose MIGS (Meta Image Generation from Scene graphs), a meta-learning based approach for few-shot image generation from scene graphs that adapts the model to different scenes and improves image quality by training on diverse sets of tasks. By sampling the data in a task-driven fashion, we train the generator using meta-learning on different sets of tasks that are categorized based on scene attributes. Our results show that this meta-learning approach to generating images from scene graphs achieves state-of-the-art performance in terms of image quality and capturing the semantic relationships in the scene. Project Website: https://migs2021.github.io/
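The abstract's core idea, sampling tasks by scene attribute and meta-training the generator on them, can be illustrated with a toy sketch. This is not the paper's actual method: the generator is reduced to a single scalar parameter fit by least squares, and a Reptile-style meta-update stands in for whatever meta-learning algorithm MIGS uses. The task dictionary, function names, and hyperparameters are all illustrative assumptions.

```python
import random
import numpy as np

def inner_adapt(theta, task, steps=5, lr=0.1):
    # Task-specific adaptation: a few SGD steps on one task's loss.
    # Here the "task" is toy linear regression data (x, y) and the
    # "generator" is a single weight w fitting y = w * x.
    w = theta.copy()
    for _ in range(steps):
        x, y = task
        grad = 2.0 * x * (w * x - y)  # gradient of (w*x - y)^2 per sample
        w -= lr * grad.mean()
    return w

def meta_train(tasks_by_attribute, meta_steps=100, eps=0.5, seed=0):
    # Task-driven sampling: pick a scene-attribute category, then a task
    # from it, adapt on that task, and nudge the shared parameters toward
    # the adapted ones (a Reptile-style meta-update, used illustratively).
    rng = random.Random(seed)
    theta = np.zeros(1)
    attributes = list(tasks_by_attribute)
    for _ in range(meta_steps):
        attr = rng.choice(attributes)
        task = rng.choice(tasks_by_attribute[attr])
        adapted = inner_adapt(theta, task)
        theta += eps * (adapted - theta)  # move toward adapted parameters
    return theta
```

The same two-level structure (inner adaptation per task, outer update across attribute-grouped tasks) is what allows a model to specialize to a new scene category from few examples.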