Robots benefit from high-fidelity reconstructions of their environment, which should be geometrically accurate and photorealistic to support downstream tasks. While this can be achieved by building distance fields from range sensors and radiance fields from cameras, realizing scalable, incremental mapping of both fields consistently and simultaneously with high quality is challenging. In this paper, we propose a novel map representation that unifies a continuous signed distance field and a Gaussian splatting radiance field within an elastic and compact point-based implicit neural map. By enforcing geometric consistency between these fields, we achieve mutual improvements by exploiting both modalities. We present a novel LiDAR-visual SLAM system called PINGS that uses the proposed map representation, and we evaluate it on several challenging large-scale datasets. Experimental results demonstrate that PINGS can incrementally build globally consistent distance and radiance fields encoded with a compact set of neural points. Compared to state-of-the-art methods, PINGS achieves superior photometric and geometric rendering at novel views by constraining the radiance field with the distance field. Furthermore, by utilizing dense photometric cues and multi-view consistency from the radiance field, PINGS produces more accurate distance fields, leading to improved odometry estimation and mesh reconstruction. We also provide an open-source implementation of PINGS at: https://github.com/PRBonn/PINGS.