Due to the inevitable noise introduced during scanning and quantization, 3D reconstruction via RGB-D sensors suffers from errors in both geometry and texture, leading to artifacts such as camera drift, mesh distortion, texture ghosting, and blurriness. Given an imperfectly reconstructed 3D model, most previous methods have focused on refining only one of geometry, texture, or camera pose, while previous joint optimization methods have employed different optimization schemes and objectives for each component, resulting in complicated systems. In this paper, we propose a novel optimization approach based on differentiable rendering that integrates the optimization of camera pose, geometry, and texture into a unified framework by enforcing consistency between the rendered results and the corresponding RGB-D inputs. Built on this unified framework, our joint optimization fully exploits the inter-relationships among geometry, texture, and camera pose, and an adaptive interleaving strategy further improves optimization stability and efficiency. Through differentiable rendering, an image-level adversarial loss is also applied to further refine the 3D model toward greater photorealism. Experiments on synthetic and real data, evaluated both quantitatively and qualitatively, demonstrate the superiority of our approach in recovering both fine-scale geometry and high-fidelity texture.
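To make the unified objective concrete, below is a minimal PyTorch-style sketch (not the paper's implementation) of jointly optimizing camera pose, geometry, and texture under a single render-versus-input consistency loss. The renderer here is a toy differentiable stand-in, and all names and hyperparameters (`render`, `H`, `W`, the Adam learning rate) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

H = W = 32  # toy image resolution

def render(vertices, colors, pose):
    # Toy differentiable stand-in for a mesh rasterizer. A real renderer
    # would rasterize the mesh under the given camera pose; here we only
    # need outputs that depend smoothly on all three inputs so that one
    # backward pass reaches pose, geometry, and texture alike.
    cam = pose[:3].sum() * 1e-3                           # fake pose dependence
    rgb = colors.mean(dim=0).view(1, 1, 3).expand(H, W, 3) + cam
    depth = vertices[:, 2].mean().expand(H, W) + cam
    return rgb, depth

# Pose, geometry (vertex positions), and texture (per-vertex colors) are all
# leaves of a single optimizer, so one consistency objective updates all
# three jointly.
vertices = torch.randn(1000, 3, requires_grad=True)   # mesh geometry
colors = torch.rand(1000, 3, requires_grad=True)      # per-vertex texture
pose = torch.zeros(6, requires_grad=True)             # camera pose (6-DoF)
opt = torch.optim.Adam([vertices, colors, pose], lr=1e-2)

rgb_obs = torch.rand(H, W, 3)    # captured RGB frame (placeholder data)
depth_obs = torch.rand(H, W)     # captured depth frame (placeholder data)

for step in range(100):
    opt.zero_grad()
    rgb_pred, depth_pred = render(vertices, colors, pose)
    # Photometric + geometric consistency against the RGB-D input.
    loss = F.l1_loss(rgb_pred, rgb_obs) + F.l1_loss(depth_pred, depth_obs)
    loss.backward()
    opt.step()
```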
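The abstract also mentions an adaptive interleaving strategy without spelling it out. Continuing the sketch above, here is one generic and purely illustrative way (not necessarily the paper's schedule) to interleave the parameter groups, cutting a phase short once it stops reducing the loss:

```python
# Generic adaptive interleaving: optimize one component at a time and move
# on early when a phase stops making progress. Illustrative only.
groups = {"pose": [pose], "geometry": [vertices], "texture": [colors]}

def set_active(active):
    # Freeze every parameter group except the active one; frozen leaves
    # receive no gradient, so opt.step() leaves them untouched.
    for name, params in groups.items():
        for p in params:
            p.requires_grad_(name == active)

for phase in ["pose", "geometry", "texture"] * 10:
    set_active(phase)
    prev = float("inf")
    for _ in range(20):                    # inner steps for this component
        opt.zero_grad()
        rgb_pred, depth_pred = render(vertices, colors, pose)
        loss = F.l1_loss(rgb_pred, rgb_obs) + F.l1_loss(depth_pred, depth_obs)
        loss.backward()
        opt.step()
        if loss.item() > prev - 1e-6:      # no measurable progress: move on
            break
        prev = loss.item()
```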