We propose a method to extrapolate a 360° field of view from a single image, allowing user-controlled synthesis of the out-painted content. To do so, we propose improvements to an existing GAN-based in-painting architecture for out-painting panoramic image representations. Our method obtains state-of-the-art results, outperforming previous methods on standard image quality metrics. To enable controlled synthesis of the out-painted content, we introduce a novel guided co-modulation framework, which drives the image generation process with a common pretrained discriminative model. Doing so maintains the high visual quality of generated panoramas while enabling user-controlled semantic content in the extrapolated field of view. We demonstrate the state-of-the-art results of our method on field-of-view extrapolation both qualitatively and quantitatively, and provide a thorough analysis of our novel editing capabilities. Finally, we show that our approach benefits the photorealistic virtual insertion of highly glossy objects into photographs.