We present an efficient method for joint optimization of topology, materials, and lighting from multi-view image observations. Unlike recent multi-view reconstruction approaches, which typically produce entangled 3D representations encoded in neural networks, we output triangle meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine unmodified. We leverage recent work in differentiable rendering and coordinate-based networks to compactly represent volumetric texturing, alongside differentiable marching tetrahedrons to enable gradient-based optimization directly on the surface mesh. Finally, we introduce a differentiable formulation of the split sum approximation of environment lighting to efficiently recover all-frequency lighting. Experiments demonstrate the extracted models in advanced scene editing, material decomposition, and high-quality view interpolation, all running at interactive rates in triangle-based renderers (rasterizers and path tracers). Project website: https://nvlabs.github.io/nvdiffrec/ .
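To make the geometry step concrete, the sketch below is a minimal, hypothetical PyTorch fragment (not the authors' released code) of the differentiable operation at the heart of marching tetrahedrons: a surface vertex is placed at the linearly interpolated zero crossing of the signed distance field along a tetrahedron edge whose endpoints have opposite signs, so gradients from image-space losses flow back to both the grid vertex positions and the SDF values.

```python
# Hypothetical sketch of the differentiable zero-crossing step in marching
# tetrahedrons; function and variable names are illustrative, not from nvdiffrec.
import torch

def edge_zero_crossing(p_a, p_b, s_a, s_b):
    """Place surface vertices on edges (p_a, p_b) where the SDF changes sign.

    p_a, p_b: (N, 3) endpoint positions of edges with a sign change.
    s_a, s_b: (N, 1) signed distance values at the endpoints (opposite signs).
    Returns (N, 3) surface vertex positions, differentiable w.r.t. all inputs.
    """
    t = s_a / (s_a - s_b)          # interpolation weight of the zero crossing
    return p_a + t * (p_b - p_a)   # vertex on the iso-surface

# Toy usage: one edge from (0,0,0) to (1,0,0) with SDF values -0.25 -> 0.75
p_a = torch.tensor([[0.0, 0.0, 0.0]])
p_b = torch.tensor([[1.0, 0.0, 0.0]])
s_a = torch.tensor([[-0.25]], requires_grad=True)
s_b = torch.tensor([[0.75]], requires_grad=True)
v = edge_zero_crossing(p_a, p_b, s_a, s_b)   # -> [[0.25, 0.0, 0.0]]
v.sum().backward()                           # gradients reach s_a and s_b
```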
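For the lighting term, the method builds on the split sum approximation (Karis, 2013), which factors the outgoing radiance integral into two parts that can each be pre-integrated. Written roughly in its standard form (with $n$ the surface normal, $f$ the specular BRDF, and $D$ its normal distribution function; this is a sketch of the standard factorization, not the paper's exact differentiable formulation):

$$
L(\omega_o) = \int_{\Omega} L_i(\omega_i)\, f(\omega_i, \omega_o)\, (\omega_i \cdot n)\, d\omega_i
\;\approx\;
\int_{\Omega} f(\omega_i, \omega_o)\, (\omega_i \cdot n)\, d\omega_i
\;\cdot\;
\int_{\Omega} L_i(\omega_i)\, D(\omega_i, \omega_o)\, (\omega_i \cdot n)\, d\omega_i
$$

The first factor depends only on roughness and the viewing angle (a small 2D lookup), while the second is a prefiltered environment map; making this factorization differentiable with respect to the environment light is what lets all-frequency lighting be recovered efficiently.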