We present a StyleGAN2-based deep learning approach for 3D shape generation, called SDF-StyleGAN, with the aim of reducing the visual and geometric dissimilarity between generated shapes and a training shape collection. We extend StyleGAN2 to 3D generation, adopt the implicit signed distance function (SDF) as the 3D shape representation, and introduce two novel discriminators, one global and one local, that distinguish real from fake SDF values and gradients to significantly improve shape geometry and visual quality. We further complement the evaluation metrics of 3D generative models with shading-image-based Fr\'echet inception distance (FID) scores to better assess the visual quality and shape distribution of the generated shapes. Experiments on shape generation demonstrate the superior performance of SDF-StyleGAN over the state-of-the-art. We further demonstrate the efficacy of SDF-StyleGAN in various tasks based on GAN inversion, including shape reconstruction, shape completion from partial point clouds, single-view image-based shape generation, and shape style editing. Extensive ablation studies justify the efficacy of our framework design. Our code and trained models are available at https://github.com/Zhengxinyang/SDF-StyleGAN.
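To make the proposed shading-image-based FID metric concrete, the following is a minimal sketch of the Fr\'echet distance computation between two sets of image features. It assumes feature vectors (e.g. Inception activations of shading images rendered from real and generated shapes) have already been extracted; the function name and array shapes are illustrative, not the paper's exact implementation.

```python
import numpy as np
from scipy import linalg


def frechet_distance(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two feature sets.

    feats_real, feats_fake: (N, D) arrays of per-image feature vectors,
    e.g. Inception activations of rendered shading images (an assumption
    for illustration; any fixed feature extractor works the same way).
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the covariance product; numerical noise can
    # introduce a tiny imaginary component, which we discard.
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    covmean = covmean.real

    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical feature distributions yield a distance near zero, while a mean or covariance mismatch between real and generated renderings increases the score, which is what makes it usable as a visual-quality metric across rendered views.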