We propose a novel geometric and photometric 3D mapping pipeline for accurate, real-time scene reconstruction from monocular images. To achieve this, we leverage recent advances in dense monocular SLAM and real-time hierarchical volumetric neural radiance fields. Our insight is that dense monocular SLAM provides exactly the information needed to fit a neural radiance field of the scene in real-time: accurate pose estimates and depth maps with associated uncertainty. With our proposed uncertainty-based depth loss, we achieve not only good photometric accuracy, but also high geometric accuracy. In fact, our proposed pipeline achieves better geometric and photometric accuracy than competing approaches (up to 179% better PSNR and 86% better L1 depth), while working in real-time and using only monocular images.
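The uncertainty-based depth loss mentioned above can be illustrated with a minimal sketch. This is only an assumed formulation, not the paper's exact loss: it supposes that each pixel's depth residual (rendered depth vs. SLAM depth) is weighted by the inverse variance reported by the dense SLAM front-end, so that unreliable depth estimates contribute less to the fit. The function name and signature are hypothetical.

```python
import numpy as np

def uncertainty_weighted_depth_loss(rendered_depth, slam_depth, depth_sigma):
    """Hypothetical sketch of an uncertainty-weighted depth loss.

    rendered_depth: depths rendered from the radiance field
    slam_depth:     depth estimates from dense monocular SLAM
    depth_sigma:    per-pixel depth standard deviations from SLAM

    Each squared residual is down-weighted by the SLAM depth variance,
    so uncertain pixels influence the loss less.
    """
    residual = rendered_depth - slam_depth
    return np.mean((residual ** 2) / (depth_sigma ** 2))
```

In practice such a loss would be combined with a photometric (color rendering) loss, and both would be minimized jointly over the radiance field parameters.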