Combining lidar with camera-based simultaneous localization and mapping (SLAM) is an effective way to improve overall accuracy, especially in large-scale outdoor scenarios. The recent development of low-cost lidars (e.g. the Livox lidar) enables us to explore such SLAM systems at a lower budget and with higher performance. In this paper we propose CamVox, which adapts Livox lidars into a visual SLAM system (ORB-SLAM2) by exploiting the lidars' unique features. Based on the non-repeating scanning nature of Livox lidars, we propose an automatic lidar-camera calibration method that works in uncontrolled scenes. The long depth detection range also benefits more efficient mapping. CamVox is evaluated against visual SLAM (VINS-mono) and lidar SLAM (LOAM) on the same dataset to demonstrate its performance. We open-source our hardware, code, and dataset on GitHub.