We propose a novel real-time LiDAR intensity-image-based simultaneous localization and mapping method, which addresses the geometry degeneracy problem in unstructured environments. Traditional LiDAR-based front-end odometry mostly relies on geometric features such as points, lines, and planes. A lack of these features in the environment can lead to the failure of the entire odometry system. To avoid this problem, we extract feature points from the LiDAR-generated point cloud that match features identified in LiDAR intensity images. We then use the extracted feature points to perform scan registration and estimate the robot's ego-motion. For the back-end, we jointly optimize the distance between corresponding feature points and the point-to-plane distance for planes identified in the map. In addition, we use the features extracted from intensity images to detect loop closure candidates among previous scans and perform pose graph optimization. Our experiments show that our method runs in real time with high accuracy and works well under illumination changes and in low-texture, unstructured environments.
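To make the back-end objective concrete, the following is a minimal sketch of a joint cost that combines matched-feature point distances with point-to-plane distances, as the abstract describes. The function names, the squared-error form, and the weights `w_feat`/`w_plane` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def point_to_plane_residual(p, q, n):
    """Signed distance from point p to the plane through q with unit normal n."""
    return float(np.dot(n, p - q))

def joint_cost(feat_src, feat_dst, pts, plane_pts, plane_normals,
               w_feat=1.0, w_plane=1.0):
    """Illustrative joint cost: feature-point term plus point-to-plane term.

    feat_src, feat_dst : (N, 3) corresponding feature points from two scans.
    pts                : (M, 3) scan points associated with map planes.
    plane_pts          : (M, 3) a point on each associated plane.
    plane_normals      : (M, 3) unit normal of each associated plane.
    """
    # Squared distances between corresponding intensity-image feature points.
    feat_term = np.sum(np.linalg.norm(feat_src - feat_dst, axis=1) ** 2)
    # Squared signed point-to-plane distances, n_i . (p_i - q_i) per row.
    plane_term = np.sum(np.einsum('ij,ij->i', plane_normals, pts - plane_pts) ** 2)
    return w_feat * feat_term + w_plane * plane_term
```

In a real system this cost would be minimized over the robot pose (e.g. with Gauss-Newton or a nonlinear least-squares library), with the source points transformed by the current pose estimate at each iteration; the sketch only evaluates the residuals for fixed correspondences.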