Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data. We introduce SinGRAF, a 3D-aware generative model trained on only a few input images of a single scene. Once trained, SinGRAF generates different realizations of this 3D scene that preserve the appearance of the input while varying the scene layout. To this end, we build on recent progress in 3D GAN architectures and introduce a novel progressive-scale patch discrimination approach during training. In several experiments, we demonstrate that SinGRAF outperforms the closest related works in both quality and diversity by a large margin.
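As a rough illustration of what progressive-scale patch discrimination can look like in practice, the following PyTorch-style sketch samples fixed-resolution ray patches whose image-space extent is annealed from large (judging global scene layout) to small (enforcing fine detail) over training. This is a minimal sketch under assumed conventions, not the paper's implementation; the function names, the linear annealing schedule, and the scale bounds are all hypothetical.

```python
# Hypothetical sketch of progressive-scale patch sampling for a
# patch discriminator; names and schedule are illustrative only.
import torch

def patch_scale(step, total_steps, s_max=1.0, s_min=0.25):
    """Linearly anneal patch extent from global to local: early,
    large patches expose overall scene layout to the discriminator;
    later, small patches emphasize fine texture detail."""
    t = min(step / max(total_steps, 1), 1.0)
    return s_max + t * (s_min - s_max)

def sample_patch_coords(batch, patch_res, scale, device="cpu"):
    """Sample normalized (u, v) coordinates for square patches of
    patch_res x patch_res rays, each covering a `scale` fraction of
    the image per side. Returns a (batch, patch_res, patch_res, 2)
    grid of coordinates in [0, 1]."""
    # Random top-left corner, keeping the patch inside the image.
    tl = torch.rand(batch, 2, device=device) * (1.0 - scale)
    # Regular grid spanning the patch.
    lin = torch.linspace(0.0, scale, patch_res, device=device)
    grid = torch.stack(torch.meshgrid(lin, lin, indexing="xy"), dim=-1)
    return tl[:, None, None, :] + grid[None]

# Example: partway through training, patches shrink toward s_min.
# Real images are cropped and generated patches are rendered at the
# same coordinates, so the discriminator compares matching scales.
coords = sample_patch_coords(8, 64, patch_scale(step=1000, total_steps=10000))
```

Keeping the patch resolution fixed while shrinking its image-space extent keeps the discriminator's (and renderer's) cost constant per step, which is one common motivation for patch-based discrimination in 3D GANs.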