Automatic 3D content creation has recently achieved rapid progress, owing to the availability of pre-trained large language models and image diffusion models, and forms the emerging topic of text-to-3D content creation. Existing text-to-3D methods commonly use implicit scene representations, which couple geometry and appearance via volume rendering and are suboptimal at recovering fine geometry and achieving photorealistic rendering; consequently, they are less effective for generating high-quality 3D assets. In this work, we propose Fantasia3D, a new method for high-quality text-to-3D content creation. Key to Fantasia3D is the disentangled modeling and learning of geometry and appearance. For geometry learning, we rely on a hybrid scene representation, and propose to encode the surface normal extracted from the representation as the input of the image diffusion model. For appearance modeling, we introduce the spatially varying bidirectional reflectance distribution function (BRDF) into the text-to-3D task, and learn the surface material for photorealistic rendering of the generated surface. Our disentangled framework is more compatible with popular graphics engines, supporting relighting, editing, and physical simulation of the generated 3D assets. We conduct thorough experiments that show the advantages of our method over existing ones under different text-to-3D task settings. Project page and source code: https://fantasia3d.github.io/.
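To make the geometry stage concrete, below is a minimal PyTorch sketch of score distillation applied to a rendered normal map, the core idea of feeding surface normals extracted from the hybrid representation to the image diffusion model. Here `render_normal` (a differentiable rasterization of normals from a hybrid representation such as DMTet) and `diffusion_eps` (the frozen pre-trained diffusion model's noise predictor) are hypothetical placeholders, and the cosine noise schedule is a toy stand-in for the model's actual schedule; this is a sketch under those assumptions, not the paper's implementation.

```python
import math
import torch

def sds_loss_on_normals(render_normal, diffusion_eps, text_emb, num_steps=1000):
    """One score-distillation step driven by a rendered normal map.

    render_normal: callable returning a (1, 3, H, W) normal image in [-1, 1],
                   differentiable w.r.t. the geometry parameters (hypothetical).
    diffusion_eps: frozen noise predictor of a pre-trained diffusion model
                   (hypothetical wrapper).
    """
    normal_img = render_normal()
    t = torch.randint(20, num_steps, (1,), device=normal_img.device)
    # Toy cosine schedule; a real implementation uses the model's own schedule.
    alpha_bar = math.cos(t.item() / num_steps * math.pi / 2) ** 2
    noise = torch.randn_like(normal_img)
    noisy = math.sqrt(alpha_bar) * normal_img + math.sqrt(1 - alpha_bar) * noise
    with torch.no_grad():
        eps_pred = diffusion_eps(noisy, t, text_emb)  # predicted noise
    # The SDS gradient (eps_pred - noise) flows only through the rendered
    # normal map, i.e. into the geometry, never through the frozen diffusion model.
    grad = eps_pred - noise
    return (grad * normal_img).sum()
```

Because the diffusion model is queried under `no_grad`, backpropagating this surrogate loss updates only the geometry parameters behind `render_normal`, which is what lets a 2D image prior supervise 3D shape.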
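For the appearance stage, the following sketch illustrates what a spatially varying BRDF field might look like: a small MLP predicting per-point diffuse albedo, roughness, and metallic, combined by a deliberately simplified Lambert-plus-Blinn-Phong shading. The network size, output parameterization, and shading model are illustrative assumptions, not the paper's exact architecture or its physically based renderer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVBRDFField(nn.Module):
    """Maps a 3D surface point to spatially varying BRDF parameters (sketch)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 5),   # 3 albedo + 1 roughness + 1 metallic
        )

    def forward(self, x):           # x: (N, 3) surface points
        out = torch.sigmoid(self.net(x))
        # diffuse albedo in [0, 1]^3, scalar roughness and metallic in [0, 1]
        return out[:, :3], out[:, 3:4], out[:, 4:5]

def shade(albedo, roughness, metallic, n, l, v):
    """Deliberately simplified shading: Lambertian diffuse + Blinn-Phong specular.

    n, l, v: (N, 3) unit surface normal, light direction, and view direction.
    """
    h = F.normalize(l + v, dim=-1)                          # half vector
    n_dot_l = (n * l).sum(-1, keepdim=True).clamp(min=0.0)
    n_dot_h = (n * h).sum(-1, keepdim=True).clamp(min=0.0)
    diffuse = (1.0 - metallic) * albedo * n_dot_l
    shininess = 2.0 / roughness.clamp(min=1e-3) ** 2        # rougher -> duller
    specular = metallic * n_dot_h ** shininess
    return diffuse + specular                               # (N, 3) linear RGB

# Example: query materials for 1024 random surface points.
brdf = SVBRDFField()
pts = torch.rand(1024, 3) * 2 - 1
albedo, rough, metal = brdf(pts)
```

Because the learned quantities are explicit material parameters on an explicit surface, they can be baked into textures and exported to standard graphics engines, which is what enables the relighting, editing, and physical simulation mentioned above.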