Generative models for 2D images have recently seen tremendous progress in quality, resolution, and speed as a result of the efficiency of 2D convolutional architectures. However, it is difficult to extend this progress into the 3D domain because most current 3D representations rely on custom network components. This paper addresses a central question: is it possible to directly leverage 2D image generative models to generate 3D shapes instead? To answer this, we propose XDGAN, an effective and fast method for applying 2D image GAN architectures to the generation of 3D object geometry combined with additional surface attributes, such as color textures and normals. Specifically, we propose a novel method to convert 3D shapes into compact 1-channel geometry images and leverage StyleGAN3 and image-to-image translation networks to generate 3D objects in 2D space. The generated geometry images are quick to convert to 3D meshes, enabling real-time 3D object synthesis, visualization, and interactive editing. Moreover, the use of standard 2D architectures can help bring more 2D advances into the 3D realm. We show both quantitatively and qualitatively that our method is highly effective at various tasks such as 3D shape generation, single-view reconstruction, and shape manipulation, while being significantly faster and more flexible than recent 3D generative models.
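To make the geometry-image-to-mesh step concrete, below is a minimal sketch of the general idea, not the authors' released code: a geometry image stores XYZ surface coordinates in a regular pixel grid, so a mesh can be recovered by treating each pixel as a vertex and triangulating neighbouring pixels. The function name `geometry_image_to_mesh` and the simple two-triangles-per-quad connectivity are illustrative assumptions.

```python
# Illustrative sketch (assumed, not the paper's implementation):
# convert an (H, W, 3) geometry image of XYZ coordinates into a triangle mesh.
import numpy as np

def geometry_image_to_mesh(geom_img: np.ndarray):
    """Return (vertices, faces) for an (H, W, 3) geometry image.

    Each pixel becomes a vertex; every 2x2 block of adjacent pixels is
    split into two triangles, so the connectivity is fixed by the grid
    and the conversion is just a reshape plus index arithmetic.
    """
    h, w, _ = geom_img.shape
    vertices = geom_img.reshape(-1, 3)          # (H*W, 3) XYZ positions

    # Flattened vertex index of the pixel at (row i, col j).
    idx = np.arange(h * w).reshape(h, w)

    # Corners of every quad formed by neighbouring pixels.
    tl = idx[:-1, :-1].ravel()   # top-left
    tr = idx[:-1, 1:].ravel()    # top-right
    bl = idx[1:, :-1].ravel()    # bottom-left
    br = idx[1:, 1:].ravel()     # bottom-right

    # Two triangles per quad with consistent winding order.
    faces = np.concatenate([
        np.stack([tl, bl, tr], axis=1),
        np.stack([tr, bl, br], axis=1),
    ], axis=0)
    return vertices, faces
```

Because the connectivity depends only on the image resolution, it can be precomputed once, which is consistent with the abstract's claim that converting generated geometry images to meshes is fast enough for real-time use.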