We have witnessed rapid progress on 3D-aware image synthesis, leveraging recent advances in generative visual models and neural rendering. Existing approaches, however, fall short in two ways: first, they may lack an underlying 3D representation or rely on view-inconsistent rendering, and hence synthesize images that are not multi-view consistent; second, they often depend on representation network architectures that are not expressive enough, and their results thus suffer in image quality. We propose a novel generative model, named Periodic Implicit Generative Adversarial Networks ($\pi$-GAN or pi-GAN), for high-quality 3D-aware image synthesis. $\pi$-GAN leverages neural representations with periodic activation functions and volumetric rendering to represent scenes as view-consistent 3D representations with fine detail. The proposed approach obtains state-of-the-art results for 3D-aware image synthesis on multiple real and synthetic datasets.
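To make the two ingredients named above concrete, the sketch below writes out one common formulation of a sine-activated (SIREN-style) implicit layer conditioned on a latent code, followed by the standard volume-rendering integral used to produce pixels from the resulting radiance field. The symbols $\gamma_i$, $\beta_i$, $\sigma$, $\mathbf{c}$, $t_n$, $t_f$ follow the usual SIREN/NeRF conventions and are not taken verbatim from this abstract; treat this as an illustrative sketch rather than the paper's exact notation.

\begin{align*}
\phi_i(\mathbf{x}) &= \sin\!\big(\gamma_i \odot (\mathbf{W}_i \mathbf{x} + \mathbf{b}_i) + \beta_i\big), \\
C(\mathbf{r}) &= \int_{t_n}^{t_f} T(t)\,\sigma\big(\mathbf{r}(t)\big)\,\mathbf{c}\big(\mathbf{r}(t), \mathbf{d}\big)\,dt,
\qquad
T(t) = \exp\!\Big(-\!\int_{t_n}^{t} \sigma\big(\mathbf{r}(s)\big)\,ds\Big).
\end{align*}

Here $\phi_i$ is a single layer of the implicit scene representation, with per-layer frequencies $\gamma_i$ and phase shifts $\beta_i$ modulated by the latent code; the integral accumulates density $\sigma$ and color $\mathbf{c}$ along a camera ray $\mathbf{r}(t)$ with viewing direction $\mathbf{d}$, which is what makes the rendered images multi-view consistent by construction.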