Accurate mapping of large-scale environments is an essential building block of most outdoor autonomous systems. A key challenge of traditional mapping methods is balancing memory consumption against mapping accuracy. This paper addresses the problem of achieving large-scale 3D reconstruction with an implicit representation learned from 3D LiDAR measurements. We learn and store implicit features in an octree-based hierarchical structure, which is sparse and extensible. The features can be turned into signed distance values by a shallow neural network. We leverage a binary cross-entropy loss to optimize the local features with the 3D measurements as supervision. Based on our implicit representation, we design an incremental mapping system with regularization to tackle the issue of catastrophic forgetting in continual learning. Our experiments show that our 3D reconstructions are more accurate, complete, and memory-efficient than those of current state-of-the-art 3D mapping methods.
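To make the decoding and supervision steps concrete, below is a minimal PyTorch sketch of how interpolated octree features could be mapped to signed distance values by a shallow network and optimized with a binary cross-entropy loss. The class names, network sizes, and the sigmoid scale are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: shallow MLP decoder + BCE loss on sigmoid-mapped distances.
# FeatureOctree lookup, feature_dim, hidden_dim, and scale are assumed values.
import torch
import torch.nn as nn

class SDFDecoder(nn.Module):
    """Shallow MLP mapping an interpolated octree feature to a signed distance value."""
    def __init__(self, feature_dim: int = 8, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, feature_dim) features queried from the sparse octree
        return self.net(feats).squeeze(-1)  # (N,) predicted signed distances

def bce_sdf_loss(pred_sdf: torch.Tensor,
                 target_sdf: torch.Tensor,
                 scale: float = 10.0) -> torch.Tensor:
    """BCE between sigmoid-mapped predicted and supervision distances.

    target_sdf is the projected signed distance of a sample point along a
    LiDAR ray; passing both distances through a sigmoid turns the regression
    into a soft occupancy classification, one plausible reading of the
    binary cross-entropy supervision described in the abstract.
    """
    pred_occ = torch.sigmoid(-scale * pred_sdf)      # close to 1 behind the surface, 0 in free space
    target_occ = torch.sigmoid(-scale * target_sdf)
    return nn.functional.binary_cross_entropy(pred_occ, target_occ)

# Usage example with random stand-in data instead of real octree queries.
decoder = SDFDecoder()
feats = torch.randn(1024, 8)          # placeholder for interpolated octree features
target = torch.randn(1024) * 0.2      # placeholder for projected distances along rays
loss = bce_sdf_loss(decoder(feats), target)
loss.backward()
```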