Latent diffusion models for image generation have crossed a quality threshold that enabled their mass adoption. Recently, a series of works has made advances toward replicating this success in the 3D domain, introducing techniques such as point cloud VAEs, triplane representations, neural implicit surfaces, and differentiable-rendering-based training. We take another step in this direction, combining these developments into a two-stage pipeline consisting of 1) a triplane VAE that learns latent representations of textured meshes and 2) a conditional diffusion model that generates the triplane features. For the first time, this architecture allows conditional and unconditional generation of high-quality textured or untextured 3D meshes across multiple diverse categories in a few seconds on a single GPU. It substantially outperforms previous work on image-conditioned and unconditional generation, in terms of both mesh quality and texture generation. Furthermore, we demonstrate that our model scales to large datasets, increasing quality and diversity. We will release our code and trained models.
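To make the two-stage design concrete, below is a minimal PyTorch sketch of the pipeline shape described above. All module and function names (`TriplaneVAE`, `LatentDenoiser`, `sample`), channel sizes, and the toy DDPM schedule are illustrative assumptions, not the paper's actual implementation; the real system would use a deeper U-Net denoiser and train the VAE with a differentiable-rendering reconstruction loss.

```python
# Illustrative sketch of a triplane-latent diffusion pipeline.
# Assumed names and shapes throughout; not the paper's code.
import torch
import torch.nn as nn

class TriplaneVAE(nn.Module):
    """Encodes per-object features into three axis-aligned latent planes
    (XY, XZ, YZ) and decodes them back; in the paper's setting the decoded
    planes would feed a neural implicit surface / differentiable renderer."""
    def __init__(self, feat_ch=32, latent_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(feat_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 2 * latent_ch, 3, padding=1),  # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_ch, 3, padding=1),
        )

    def encode(self, planes):
        # planes: (B, 3, C, H, W), one channel stack per plane
        b, p, c, h, w = planes.shape
        mu, logvar = self.encoder(planes.flatten(0, 1)).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return z.view(b, p, -1, h, w), mu, logvar

    def decode(self, z):
        b, p, c, h, w = z.shape
        return self.decoder(z.flatten(0, 1)).view(b, p, -1, h, w)

class LatentDenoiser(nn.Module):
    """Predicts the noise added to triplane latents, conditioned on a
    timestep and an image embedding (e.g. from a frozen image encoder)."""
    def __init__(self, latent_ch=8, cond_dim=512, hidden=64):
        super().__init__()
        self.inp = nn.Conv2d(latent_ch, hidden, 3, padding=1)
        self.film = nn.Linear(cond_dim + 1, 2 * hidden)  # scale/shift from (cond, t)
        self.out = nn.Conv2d(hidden, latent_ch, 3, padding=1)

    def forward(self, z, t, cond):
        # z: (B, 3, C, H, W); t: (B,); cond: (B, cond_dim)
        b, p, c, h, w = z.shape
        hid = torch.relu(self.inp(z.flatten(0, 1)))
        film = self.film(torch.cat([cond, t[:, None].float()], dim=1))
        scale, shift = film.repeat_interleave(p, dim=0).chunk(2, dim=1)
        hid = hid * (1 + scale[..., None, None]) + shift[..., None, None]
        return self.out(hid).view(b, p, c, h, w)

@torch.no_grad()
def sample(denoiser, vae, cond, steps=50, shape=(1, 3, 8, 64, 64)):
    """Toy DDPM-style ancestral sampling over the triplane latents."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas, abar = 1.0 - betas, torch.cumprod(1.0 - betas, dim=0)
    z = torch.randn(shape)
    for t in reversed(range(steps)):
        eps = denoiser(z, torch.full((shape[0],), t), cond)
        z = (z - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            z = z + betas[t].sqrt() * torch.randn_like(z)
    return vae.decode(z)  # triplane features for surface extraction / texturing

# Example call with a random (hypothetical) image-conditioning embedding:
mesh_feats = sample(LatentDenoiser(), TriplaneVAE(), torch.randn(1, 512))
```

The key design point this sketch reflects is that diffusion runs entirely in the compact triplane latent space rather than on meshes or voxels directly, which is what makes generation in a few seconds on a single GPU plausible.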