With the advent of powerful, lightweight 3D LiDARs, these sensors have become the heart of many navigation and SLAM algorithms on various autonomous systems. Point-cloud registration methods that operate on unstructured point clouds, such as ICP, are often computationally expensive or require a good initial guess. Furthermore, 3D feature-based registration methods have never quite reached the robustness of 2D methods in visual SLAM. With the continuously increasing resolution of LiDAR range images, these 2D methods not only become applicable but can also exploit the illumination-independent modalities that such images provide, namely depth and intensity. In visual SLAM, deep-learned 2D features and descriptors perform exceptionally well compared to traditional methods. In this publication, we use a state-of-the-art 2D feature network as the basis for 3D3L, exploiting both the intensity and depth channels of LiDAR range images to extract powerful 3D features. Our results show that keypoints and descriptors extracted from LiDAR scan images outperform the state of the art on several benchmark metrics and enable robust scan-to-scan alignment as well as global localization.
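To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation, of how a SuperPoint-style 2D feature network could consume a two-channel depth/intensity range image and lift its detections to 3D keypoints. The network architecture, the names `FeatureNet` and `extract_3d_features`, and all parameters are illustrative assumptions; the actual 3D3L architecture is defined in the paper.

```python
import torch

# Hypothetical SuperPoint-style 2D feature network (illustrative only;
# the real 3D3L architecture and training are described in the paper).
class FeatureNet(torch.nn.Module):
    def __init__(self, in_channels: int = 2, desc_dim: int = 256):
        super().__init__()
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(in_channels, 64, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(64, 128, 3, padding=1), torch.nn.ReLU(),
        )
        self.score_head = torch.nn.Conv2d(128, 1, 1)         # keypoint score map
        self.desc_head = torch.nn.Conv2d(128, desc_dim, 1)   # dense descriptors

    def forward(self, x):
        f = self.backbone(x)
        score = torch.sigmoid(self.score_head(f))
        desc = torch.nn.functional.normalize(self.desc_head(f), dim=1)
        return score, desc

def extract_3d_features(depth, intensity, xyz, net, k=500):
    """depth, intensity: (H, W) range-image channels.
    xyz: (H, W, 3) per-pixel 3D points from the LiDAR scan."""
    img = torch.stack([depth, intensity]).unsqueeze(0)  # (1, 2, H, W)
    score, desc = net(img)
    idx = score.flatten().topk(k).indices               # k strongest detections
    keypoints_3d = xyz.reshape(-1, 3)[idx]              # lift 2D detections to 3D
    descriptors = desc.squeeze(0).flatten(1).T[idx]     # (k, desc_dim)
    return keypoints_3d, descriptors
```

Because every range-image pixel maps back to a measured 3D point, the 2D keypoints come with 3D positions for free, and the L2-normalized descriptors can be matched by nearest-neighbor search for scan-to-scan alignment or place recognition.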