We present MatDecompSDF, a novel framework for recovering high-fidelity 3D shapes and decomposing their physically-based material properties from multi-view images. The core challenge of inverse rendering lies in the ill-posed disentanglement of geometry, materials, and illumination from 2D observations. Our method addresses this by jointly optimizing three neural components: a neural Signed Distance Function (SDF) to represent complex geometry, a spatially-varying neural field for predicting PBR material parameters (albedo, roughness, metallic), and an MLP-based model for capturing unknown environmental lighting. The key to our approach is a physically-based differentiable rendering layer that connects these 3D properties to the input images, enabling end-to-end optimization. We introduce a set of carefully designed physical priors and geometric regularizations, including a material smoothness loss and an Eikonal loss, to effectively constrain the problem and achieve robust decomposition. Extensive experiments on both synthetic and real-world datasets (e.g., DTU) demonstrate that MatDecompSDF surpasses state-of-the-art methods in geometric accuracy, material fidelity, and novel-view synthesis quality. Crucially, our method produces editable and relightable assets that can be seamlessly integrated into standard graphics pipelines, validating its practical utility for digital content creation.
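To make the Eikonal regularization mentioned above concrete, the sketch below implements the standard term $\mathbb{E}_x\big[(\lVert\nabla f(x)\rVert - 1)^2\big]$ on a toy SDF MLP in PyTorch. This is a minimal illustration, not the paper's implementation: the class `SDFNetwork`, the helper `eikonal_loss`, the layer widths, the sampling scheme, and the loss weight are all illustrative assumptions.

```python
# Minimal sketch of Eikonal regularization for a neural SDF (illustrative only;
# network sizes and sampling are placeholders, not the paper's configuration).
import torch
import torch.nn as nn


class SDFNetwork(nn.Module):
    """Toy MLP mapping 3D points to signed distances (stand-in for the SDF component)."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def eikonal_loss(sdf: SDFNetwork, points: torch.Tensor) -> torch.Tensor:
    """Penalize deviation of ||grad f(x)|| from 1 so f behaves as a valid SDF."""
    points = points.requires_grad_(True)
    d = sdf(points)
    # Gradient of the predicted distance w.r.t. the input points.
    (grad,) = torch.autograd.grad(
        d, points, grad_outputs=torch.ones_like(d), create_graph=True
    )
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()


# Usage: sample points in the scene bounds and add the term to the rendering loss.
sdf = SDFNetwork()
pts = torch.rand(1024, 3) * 2.0 - 1.0  # uniform samples in [-1, 1]^3
loss = eikonal_loss(sdf, pts)
loss.backward()
```

In practice this term is added to the photometric rendering objective with a small weight; values around 0.1 are common in NeuS-style pipelines, though the weighting used by MatDecompSDF is not stated in this abstract.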