Depth perception is an invaluable source of information for 3D mapping and various robotics applications. However, point cloud maps acquired with consumer-grade light detection and ranging sensors (lidars) still suffer from bias related to local surface properties, such as the beam-to-surface incidence angle, distance, texture, reflectance, or illumination conditions. This has recently motivated researchers to exploit traditional filters, as well as the deep learning paradigm, to suppress these depth sensor errors while preserving geometric detail and map consistency. Despite these efforts, depth correction of lidar measurements remains an open challenge, mainly due to the lack of clean 3D data that could serve as ground truth. In this paper, we introduce two novel point cloud map consistency losses that facilitate self-supervised learning of lidar depth correction models on real data. Specifically, the models exploit multiple point cloud measurements of the same scene from different viewpoints to learn to reduce the bias based on the constructed map consistency signal. Complementary to removing the bias from the measurements, we demonstrate that the depth correction models help to reduce localization drift. Additionally, we release a dataset that contains point cloud data captured in an indoor corridor environment with precise localization and ground truth mapping information.
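To make the idea of a map-consistency signal concrete, the sketch below shows one simple way such a self-supervised loss could be formed: after merging aligned scans of the same scene taken from several viewpoints, depth bias manifests as thickened or warped surfaces, so the smallest eigenvalue of each point's local neighborhood covariance (its "plane thickness") can serve as a training signal. This is a minimal illustration, not the paper's actual losses; the function name and the eigenvalue-based formulation are assumptions.

```python
import numpy as np

def plane_consistency_loss(points, k=16):
    """Hypothetical map-consistency signal (illustrative only, not the
    paper's loss): for each point in the merged map, measure how flat its
    local neighborhood is. Biased depth thickens/warps surfaces, so a
    lower value indicates a more consistent map.
    points: (N, 3) merged cloud from several aligned viewpoints."""
    n = len(points)
    # Brute-force k-NN; fine for a sketch, use a KD-tree at scale.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d2, axis=1)[:, :k]       # indices of k nearest points
    loss = 0.0
    for i in range(n):
        cov = np.cov(points[nn[i]].T)        # 3x3 local covariance
        loss += np.linalg.eigvalsh(cov)[0]   # smallest eigenvalue ~ plane thickness
    return loss / n
```

Applied to points sampled from a planar wall, this loss is near zero; adding depth noise along the surface normal increases it, which is the property a learned depth correction model would be trained to minimize.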