We introduce a high-resolution, 3D-consistent image and shape generation technique which we call StyleSDF. Our method is trained on single-view RGB data only, and stands on the shoulders of StyleGAN2 for image generation, while solving two main challenges in 3D-aware GANs: 1) high-resolution, view-consistent generation of RGB images, and 2) detailed 3D shape. We achieve this by merging an SDF-based 3D representation with a style-based 2D generator. Our 3D implicit network renders low-resolution feature maps, from which the style-based network generates view-consistent, 1024×1024 images. Notably, our SDF-based 3D modeling defines detailed 3D surfaces, leading to consistent volume rendering. Our method outperforms the state of the art in terms of visual and geometric quality.
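To make the two-stage pipeline concrete, the sketch below illustrates the core idea of SDF-based volume rendering of a low-resolution feature map, which a style-based 2D generator would then upsample to the final image. This is a minimal illustration, not the authors' implementation: the sigmoid-based SDF-to-density conversion follows the general recipe used by SDF-based volume renderers, and the function names, tensor shapes, sample counts, and the `alpha` sharpness parameter are illustrative assumptions.

```python
import torch

def sdf_to_density(sdf, alpha):
    # Convert signed distances to volume density; a common SDF-based choice
    # is a scaled sigmoid so that density concentrates near the zero-level
    # set (the surface). `alpha` controls how sharply it falls off.
    return (1.0 / alpha) * torch.sigmoid(-sdf / alpha)

def render_features(sdf, features, deltas, alpha):
    # sdf:      (num_rays, num_samples)     signed distance at each ray sample
    # features: (num_rays, num_samples, C)  feature vector at each ray sample
    # deltas:   (num_rays, num_samples)     spacing between consecutive samples
    sigma = sdf_to_density(sdf, alpha)
    # Standard volume-rendering weights: per-sample opacity times transmittance.
    alphas = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = alphas * trans
    # Composite per-sample features into one pixel of the low-res feature map.
    return (weights.unsqueeze(-1) * features).sum(dim=-2)

# Usage sketch: render a 64x64 feature map (here flattened to 4096 rays) that
# a style-based 2D generator would upsample to a 1024x1024 image.
rays, samples, channels = 64 * 64, 24, 256
feat_map = render_features(torch.randn(rays, samples),
                           torch.randn(rays, samples, channels),
                           torch.full((rays, samples), 0.05),
                           alpha=torch.tensor(0.1))
print(feat_map.shape)  # torch.Size([4096, 256])
```

Because density is derived from a signed distance function rather than predicted freely, the zero-level set of the SDF gives a well-defined surface, which is what makes the rendered views geometrically consistent.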