We propose a methodology for robust, real-time place recognition using an imaging lidar, which yields dense, image-like, high-resolution 3D point clouds. Utilizing the intensity readings of the imaging lidar, we project the point cloud and obtain an intensity image. ORB feature descriptors are extracted from the image and encoded into a bag-of-words vector. The vector, which identifies the point cloud, is inserted into a database maintained by DBoW for fast place-recognition queries. The returned candidate is further validated by matching visual feature descriptors. To reject matching outliers, we apply PnP with RANSAC, minimizing the reprojection error between the visual features' positions in Euclidean space and their correspondences in 2D image space. Combining the advantages of both camera- and lidar-based place recognition approaches, our method is truly rotation-invariant and can handle reverse and upside-down revisits. The proposed method is evaluated on datasets gathered from a variety of platforms over different scales and environments. Our implementation and datasets are available at https://git.io/image-lidar