The goal of inverse rendering is to decompose geometry, lights, and materials given posed multi-view images. To achieve this goal, we propose neural direct and joint inverse rendering (NDJIR). Unlike prior works, which rely on approximations of the rendering equation, NDJIR directly addresses the integrals in the rendering equation and jointly decomposes geometry (signed distance function), lights (environment and implicit lights), and materials (base color, roughness, and specular reflectance), using the powerful and flexible volume rendering framework, voxel grid features, and Bayesian priors. Because our method directly uses physically-based rendering, we can seamlessly export an extracted mesh with its materials to DCC tools, and we show material conversion examples. We perform intensive experiments showing that our proposed method semantically decomposes real objects in photogrammetric settings and identifying which factors contribute to accurate inverse rendering.