Visual-inertial SLAM is essential in many fields, such as AR/VR, uncrewed aerial vehicles, industrial robots, and autonomous driving. Fusing a camera with an inertial measurement unit (IMU) compensates for the shortcomings of a single sensor and significantly improves the accuracy and robustness of localization in challenging environments. Robust tracking and accurate inertial parameter estimation are the basis for stable operation of such a system. This article presents PLE-SLAM, an accurate and real-time visual-inertial SLAM algorithm based on point-line features and efficient IMU initialization. First, we introduce line features into a point-based visual-inertial SLAM system, using parallel computing to extract features and compute descriptors so that real-time performance is preserved. Second, the proposed system estimates the gyroscope bias from rotation pre-integration together with point and line observations, while the accelerometer bias and gravity direction are solved by an analytical method. After initialization, all inertial parameters are refined through maximum a posteriori (MAP) estimation. Moreover, we add a dedicated thread for eliminating dynamic features to improve adaptability to dynamic environments, and we combine a CNN, bag-of-words, and a GNN to detect loops and match features. The wide-baseline matching capability and illumination robustness of the DNN-based matcher significantly improve loop-detection recall and inter-frame pose estimation at loop closure. Both the front end and the back end are designed for hardware acceleration. Experiments on public datasets show that the proposed system is among the state-of-the-art methods in complex scenarios.
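To make the gyroscope-bias step concrete, the sketch below shows the standard linear least-squares solve used in many visual-inertial initializers (e.g., VINS-style pipelines): given keyframe rotations from the visual front end, pre-integrated gyroscope rotations, and their bias Jacobians, the bias correction is obtained by stacking the first-order residuals `J_ij * db ≈ Log(ΔR_ij^T R_i^T R_j)`. This is a minimal illustration under those assumptions, not the authors' actual implementation; all function names are hypothetical.

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: axis-angle vector -> rotation matrix."""
    t = np.linalg.norm(phi)
    if t < 1e-10:
        return np.eye(3) + hat(phi)
    a = hat(phi / t)
    return np.eye(3) + np.sin(t) * a + (1.0 - np.cos(t)) * (a @ a)

def log_so3(R):
    """Inverse of exp_so3: rotation matrix -> axis-angle vector."""
    c = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    t = np.arccos(c)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    if t < 1e-10:
        return 0.5 * w
    return t / (2.0 * np.sin(t)) * w

def estimate_gyro_bias(R_list, dR_meas, J_list):
    """Solve sum_ij || J_ij*db - Log(dR_ij^T R_i^T R_j) ||^2 for the
    gyroscope bias correction db via normal equations."""
    H = np.zeros((3, 3))
    g = np.zeros(3)
    for (Ri, Rj), dR, J in zip(zip(R_list[:-1], R_list[1:]), dR_meas, J_list):
        e = log_so3(dR.T @ Ri.T @ Rj)   # rotation residual between pre-integration and vision
        H += J.T @ J
        g += J.T @ e
    return np.linalg.solve(H, g)
```

In a full initializer this estimate is only the starting point; as the abstract notes, all inertial parameters are subsequently refined jointly by MAP estimation.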