We present a fine-tuning method to improve the appearance of 3D geometries reconstructed from single images. We leverage advances in monocular depth estimation to obtain disparity maps and present a novel approach to transforming 2D normalized disparity maps into 3D point clouds by solving an optimization over the relevant camera parameters. After creating a 3D point cloud from disparity, we introduce a method to combine the new point cloud with existing information to form a more faithful and detailed final geometry. We demonstrate the efficacy of our approach with multiple experiments on both synthetic and real images.
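As a rough illustration of the disparity-to-point-cloud step described above, the sketch below back-projects a disparity map through a pinhole camera model. The function name and the specific parameters (`fx`, `fy`, `cx`, `cy`, `baseline`) are assumptions for illustration; the paper's method estimates the relevant camera parameters by optimization rather than taking them as given.

```python
import numpy as np

def disparity_to_point_cloud(disparity, fx, fy, cx, cy, baseline=1.0, eps=1e-6):
    """Back-project a disparity map (H x W) into an (H*W, 3) point cloud
    using a pinhole camera model. Depth is inversely proportional to
    disparity; eps guards against division by zero.

    fx, fy  : focal lengths in pixels (assumed/optimized camera parameters)
    cx, cy  : principal point in pixels
    baseline: stereo baseline scale (normalized disparity needs a scale)
    """
    h, w = disparity.shape
    depth = (fx * baseline) / (disparity + eps)          # z = f * b / d
    u, v = np.meshgrid(np.arange(w), np.arange(h))       # pixel grid
    x = (u - cx) * depth / fx                            # unproject x
    y = (v - cy) * depth / fy                            # unproject y
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

In practice the camera parameters passed here would come from the optimization the abstract refers to, after which the resulting point cloud is merged with the existing reconstructed geometry.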