Probabilistic diffusion models have achieved state-of-the-art results for image synthesis, inpainting, and text-to-image tasks. However, they are still in the early stages of generating complex 3D shapes. This work proposes Diffusion-SDF, a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds. We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks. Neural SDFs are implicit functions, and diffusing them amounts to learning the reversal of a diffusion process over their neural network weights, which we solve with a custom modulation module. Extensive experiments show that our method is capable of both realistic unconditional generation and conditional generation from partial inputs. This work expands the domain of diffusion models from learning 2D, explicit representations to 3D, implicit representations.
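To make the representation concrete, here is a minimal sketch of a neural SDF in PyTorch: an MLP that maps a 3D query coordinate to a scalar signed distance, so the shape's surface is the zero level set of the network. The layer sizes, depth, and activations below are illustrative assumptions; the abstract does not specify the paper's architecture or conditioning scheme.

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """MLP mapping 3D coordinates to signed distance values.

    Hypothetical architecture: layer widths and activations are
    placeholders, not the configuration used in Diffusion-SDF.
    """
    def __init__(self, hidden_dim=256, num_layers=4):
        super().__init__()
        layers = []
        in_dim = 3  # (x, y, z) query coordinate
        for _ in range(num_layers):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(hidden_dim, 1))  # scalar signed distance
        self.net = nn.Sequential(*layers)

    def forward(self, xyz):
        # xyz: (N, 3) query points; returns (N, 1) signed distances.
        # Negative inside the surface, positive outside, zero on it.
        return self.net(xyz)

# Querying the implicit surface: points with |f(x)| near 0 lie on the shape.
sdf = NeuralSDF()
points = torch.rand(1024, 3) * 2 - 1  # samples in [-1, 1]^3
distances = sdf(points)
```

Because the geometry lives entirely in the network weights, generating a new shape means generating a new set of weights, which is what the diffusion model operates on.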
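A hedged sketch of what diffusing such a representation can look like: a generic DDPM-style forward noising and epsilon-prediction loss over latent codes standing in for the modulated SDF weights. The `LatentDenoiser`, the crude timestep embedding, and the linear noise schedule are illustrative stand-ins, not the paper's custom modulation module.

```python
import torch
import torch.nn as nn

class LatentDenoiser(nn.Module):
    """Hypothetical denoiser over latent modulation vectors.

    The abstract only states that diffusion runs over SDF network
    weights via a modulation module, so this generic network is an
    illustrative stand-in for that component.
    """
    def __init__(self, latent_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 512), nn.SiLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z_t, t):
        # Predict the noise that was added to latent z_t at timestep t.
        t_feat = t.float().unsqueeze(-1) / 1000.0  # crude timestep embedding
        return self.net(torch.cat([z_t, t_feat], dim=-1))

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # standard linear schedule
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

def q_sample(z0, t, noise):
    # Forward process: z_t = sqrt(a_bar_t) * z0 + sqrt(1 - a_bar_t) * eps.
    a = alpha_bars[t].unsqueeze(-1)
    return a.sqrt() * z0 + (1 - a).sqrt() * noise

denoiser = LatentDenoiser()
z0 = torch.randn(8, 512)            # latents encoding SDF weights (assumed dim)
t = torch.randint(0, T, (8,))
noise = torch.randn_like(z0)
z_t = q_sample(z0, t, noise)
loss = nn.functional.mse_loss(denoiser(z_t, t), noise)  # eps-prediction loss
```

Conditional generation from partial inputs (e.g., an incomplete point cloud or a single view) would then condition this denoiser on an encoding of the observation; the sketch above shows only the unconditional case.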