Modeling scene geometry with implicit neural representations has shown advantages in accuracy, flexibility, and memory efficiency. Previous approaches have demonstrated impressive results using color or depth images, but they still struggle with poor lighting conditions and large-scale scenes. Methods that take a global point cloud as input require accurate registration and ground-truth coordinate labels, which limits their application scenarios. In this paper, we propose a new method that uses sparse LiDAR point clouds and rough odometry to efficiently reconstruct a fine-grained implicit occupancy field within a few minutes. We introduce a new loss function that supervises directly in 3D space without 2D rendering, avoiding information loss. We also refine the poses of input frames in an end-to-end manner, producing consistent geometry without global point cloud registration. To the best of our knowledge, our method is the first to reconstruct an implicit scene representation from LiDAR-only input. Experiments on synthetic and real-world datasets, covering both indoor and outdoor scenes, show that our method is effective, efficient, and accurate, achieving results comparable to existing methods that use dense input.
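To make the idea of supervising an occupancy field directly in 3D space (rather than through 2D rendering) concrete, the following is a minimal PyTorch sketch. It is not the paper's actual formulation: the network `OccupancyMLP`, the `lidar_3d_supervision_loss` function, the free-space sampling scheme, and all hyperparameters are illustrative assumptions. It only shows the general pattern of labeling points sampled along LiDAR rays before the measured endpoint as free and the endpoint itself as occupied, then applying a classification loss on the network's predictions at those 3D points.

```python
import torch
import torch.nn as nn


class OccupancyMLP(nn.Module):
    """Small coordinate MLP mapping 3D points to occupancy logits (illustrative only)."""

    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        # xyz: (..., 3) query points -> occupancy logits (...,)
        return self.net(xyz).squeeze(-1)


def lidar_3d_supervision_loss(model, origins, endpoints, n_free=8, eps=0.05):
    """Hypothetical 3D-space supervision for LiDAR rays, with no 2D rendering step.

    origins, endpoints: (N, 3) sensor origins and measured hit points per ray.
    Points sampled strictly before each endpoint are labeled free (0);
    the endpoint itself is labeled occupied (1).
    """
    dirs = endpoints - origins                                        # (N, 3) ray directions
    t = torch.rand(origins.shape[0], n_free, device=origins.device) * (1.0 - eps)
    free_pts = origins[:, None, :] + t[..., None] * dirs[:, None, :]  # (N, n_free, 3)

    free_logits = model(free_pts.reshape(-1, 3))
    occ_logits = model(endpoints)

    bce = nn.functional.binary_cross_entropy_with_logits
    loss_free = bce(free_logits, torch.zeros_like(free_logits))
    loss_occ = bce(occ_logits, torch.ones_like(occ_logits))
    return loss_free + loss_occ
```

Because the supervision acts on 3D sample points rather than rendered depth or color, the same loss can in principle be back-propagated to per-frame pose parameters that transform the rays, which is the spirit of the end-to-end pose refinement described above; the exact parameterization used in the paper is not shown here.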