DreamFusion has recently demonstrated the utility of a pre-trained text-to-image diffusion model to optimize Neural Radiance Fields (NeRF), achieving remarkable text-to-3D synthesis results. However, the method has two inherent limitations: (a) extremely slow optimization of NeRF and (b) low-resolution image space supervision on NeRF, leading to low-quality 3D models with a long processing time. In this paper, we address these limitations by utilizing a two-stage optimization framework. First, we obtain a coarse model using a low-resolution diffusion prior, accelerated with a sparse 3D hash grid structure. Using the coarse representation as the initialization, we further optimize a textured 3D mesh model with an efficient differentiable renderer interacting with a high-resolution latent diffusion model. Our method, dubbed Magic3D, can create high-quality 3D mesh models in 40 minutes, which is 2x faster than DreamFusion (reportedly taking 1.5 hours on average), while also achieving higher resolution. User studies show that 61.7% of raters prefer our approach over DreamFusion. Together with image-conditioned generation capabilities, we provide users with new ways to control 3D synthesis, opening up new avenues for various creative applications.
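Both optimization stages supervise the 3D model through a diffusion prior rather than through direct image reconstruction. As a point of reference, the following is a minimal sketch of the underlying mechanism, assuming the standard score distillation sampling (SDS) gradient from DreamFusion, on which this framework builds; the weighting $w(t)$, noise schedule $(\alpha_t, \sigma_t)$, and noise predictor $\hat{\epsilon}_\phi$ follow DreamFusion's notation and are not spelled out in this abstract:

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}\big(\phi,\, x = g(\theta)\big)
= \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\, \tfrac{\partial x}{\partial \theta} \,\right],
\qquad x_t = \alpha_t\, x + \sigma_t\, \epsilon,
$$

where $g(\theta)$ renders an image $x$ from the 3D parameters $\theta$ (the hash-grid NeRF in the coarse stage, the textured mesh in the fine stage), $y$ is the text prompt, and $\epsilon \sim \mathcal{N}(0, I)$.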