Large-scale text-guided diffusion models have garnered significant attention due to their ability to synthesize diverse images that convey complex visual concepts. This generative power has more recently been leveraged to perform text-to-3D synthesis. In this work, we present a technique that harnesses the power of latent diffusion models for editing existing 3D objects. Our method takes oriented 2D images of a 3D object as input and learns a grid-based volumetric representation of it. To guide the volumetric representation to conform to a target text prompt, we follow unconditional text-to-3D methods and optimize a Score Distillation Sampling (SDS) loss. However, we observe that combining this diffusion-guided loss with an image-based regularization loss that encourages the representation not to deviate too strongly from the input object is challenging, as it requires achieving two conflicting goals while viewing only structure-and-appearance coupled 2D projections. Thus, we introduce a novel volumetric regularization loss that operates directly in 3D space, utilizing the explicit nature of our 3D representation to enforce correlation between the global structure of the original and edited object. Furthermore, we present a technique that optimizes cross-attention volumetric grids to refine the spatial extent of the edits. Extensive experiments and comparisons demonstrate the effectiveness of our approach in creating a myriad of edits that cannot be achieved by prior works.
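For context, the SDS objective referenced above follows the standard score-distillation formulation introduced by DreamFusion; the notation below is a recap rather than a quote from this work, and in the latent-diffusion setting x would be the latent encoding of a rendered view rather than the rendering itself:

\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(z_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \right], \qquad z_t = \alpha_t\, x + \sigma_t\, \epsilon,

where x = g(\theta) is rendered from the volumetric representation with parameters \theta, y is the target text prompt, \hat{\epsilon}_\phi is the pretrained denoiser, and w(t), \alpha_t, \sigma_t are the usual noise-schedule terms.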
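The abstract only states that the volumetric regularizer "enforces correlation between the global structure of the original and edited object." As a minimal sketch of what such a term could look like, the snippet below penalizes one minus a Pearson-style correlation between the original and edited density grids; the function name, grid sizes, and exact correlation form are illustrative assumptions, not the paper's definition.

import torch

def volumetric_regularization_loss(density_orig: torch.Tensor,
                                   density_edit: torch.Tensor,
                                   eps: float = 1e-8) -> torch.Tensor:
    # Hypothetical 3D regularizer: flatten both grids, center them, and
    # compute a Pearson-style correlation; the loss drives it toward 1,
    # keeping the edited grid globally aligned with the original.
    x = density_orig.flatten().float()
    y = density_edit.flatten().float()
    x = x - x.mean()
    y = y - y.mean()
    corr = (x * y).sum() / (x.norm() * y.norm() + eps)
    return 1.0 - corr

# Toy usage with hypothetical 64^3 density grids (names are illustrative).
orig_grid = torch.rand(64, 64, 64)
edit_grid = (orig_grid + 0.1 * torch.randn_like(orig_grid)).clamp(min=0.0).requires_grad_(True)

loss = volumetric_regularization_loss(orig_grid.detach(), edit_grid)
loss.backward()  # gradients flow only into the edited grid
print(float(loss))

In practice such a term would be summed with the SDS loss during optimization, so the diffusion guidance reshapes local content while the 3D correlation term preserves the object's overall structure.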