Recent breakthroughs in text-to-image synthesis have been driven by diffusion models trained on billions of image-text pairs. Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D data and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss. The resulting 3D model of the given text can be viewed from any angle, relit by arbitrary illumination, or composited into any 3D environment. Our approach requires no 3D training data and no modifications to the image diffusion model, demonstrating the effectiveness of pretrained image diffusion models as priors.
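To make the optimization loop described above more concrete, the following is a minimal, hypothetical sketch in PyTorch of the general recipe: render an image from learnable parameters, perturb it with noise, ask a frozen denoiser to predict that noise, and push the residual between predicted and true noise back as a gradient on the renderer's parameters only. The ToyDenoiser, the single learnable image standing in for NeRF renderings from random angles, and all hyperparameters are illustrative placeholders and not the paper's implementation.

```python
import torch
import torch.nn as nn

class ToyDenoiser(nn.Module):
    # Stand-in for a pretrained, text-conditioned 2D diffusion model (kept frozen).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x_t, t):
        # Predict the noise added to x_t (the timestep is ignored in this toy).
        return self.net(x_t)

# "Parametric image generator": a learnable 64x64 image in place of NeRF renderings.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

denoiser = ToyDenoiser()
for p in denoiser.parameters():
    p.requires_grad_(False)  # the diffusion prior is never updated

opt = torch.optim.Adam([image], lr=1e-2)
for step in range(200):
    t = torch.rand(1)              # random noise level in [0, 1]
    eps = torch.randn_like(image)  # sampled Gaussian noise
    alpha = (1.0 - t).view(1, 1, 1, 1)

    # Noise the rendering and query the frozen denoiser; no gradients flow
    # through the diffusion model itself.
    with torch.no_grad():
        x_t = alpha.sqrt() * image + (1 - alpha).sqrt() * eps
        eps_hat = denoiser(x_t, t)

    # Distillation-style update: treat (eps_hat - eps) as the gradient of the
    # loss with respect to the rendered image, via a surrogate objective whose
    # gradient w.r.t. `image` is exactly that residual.
    grad = eps_hat - eps
    sds_loss = (grad * image).sum()

    opt.zero_grad()
    sds_loss.backward()
    opt.step()
```

In the actual method, the learnable image would be replaced by differentiable NeRF renderings from randomly sampled camera poses, and the toy denoiser by a pretrained text-to-image diffusion model conditioned on the given prompt.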