We propose a methodology for robust, real-time place recognition using an imaging lidar, which yields image-quality, high-resolution 3D point clouds. Utilizing the intensity readings of an imaging lidar, we project the point cloud and obtain an intensity image. ORB feature descriptors are extracted from the image and encoded into a bag-of-words vector. The vector, used to identify the point cloud, is inserted into a database that is maintained by DBoW for fast place recognition queries. The returned candidate is further validated by matching visual feature descriptors. To reject matching outliers, we apply PnP with RANSAC, minimizing the reprojection error between the visual features' positions in Euclidean space and their correspondences in 2D image space. Combining the advantages of both camera- and lidar-based place recognition approaches, our method is truly rotation-invariant and can handle reverse revisiting and upside-down revisiting. The proposed method is evaluated on datasets gathered from a variety of platforms over different scales and environments. Our implementation is available at https://git.io/imaging-lidar-place-recognition
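The projection step can be illustrated with a minimal spherical-projection sketch: each point's azimuth selects a column and its elevation selects a row, and the pixel stores the lidar intensity reading. The image size, field-of-view limits, and function name below are illustrative assumptions, not the paper's exact sensor model.

```python
import numpy as np

def project_to_intensity_image(points, intensities,
                               n_rows=64, n_cols=1024,
                               fov_up=15.0, fov_down=-15.0):
    """Spherical projection of a lidar point cloud into a 2D intensity
    image (a common scheme; the resolution and vertical field of view
    here are assumed, not taken from the paper)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    # Azimuth angle -> column index, elevation angle -> row index.
    azimuth = np.arctan2(y, x)                      # in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-9))  # radians
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    cols = ((azimuth + np.pi) / (2 * np.pi) * n_cols).astype(int) % n_cols
    rows = (fov_up_r - elevation) / (fov_up_r - fov_down_r) * n_rows
    rows = np.clip(rows.astype(int), 0, n_rows - 1)
    image = np.zeros((n_rows, n_cols), dtype=np.float32)
    image[rows, cols] = intensities  # keeps the last hit per pixel
    return image

# Example: a few synthetic points with intensity readings.
pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 1.0], [-10.0, 0.0, -1.0]])
inten = np.array([0.2, 0.5, 0.9], dtype=np.float32)
img = project_to_intensity_image(pts, inten)
print(img.shape)  # (64, 1024)
```

ORB extraction and PnP/RANSAC validation would then operate on `img` as on an ordinary grayscale camera image, which is what makes camera-style feature pipelines applicable to lidar data.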