We propose MetroLoc, an accurate and robust multi-modal sensor fusion framework for one of the most extreme scenarios: large-scale metro vehicle localization and mapping. MetroLoc is built atop an IMU-centric state estimator that tightly couples light detection and ranging (LiDAR), visual, and inertial information while retaining the convenience of loosely coupled methods. The proposed framework is composed of three submodules: IMU odometry, LiDAR-inertial odometry (LIO), and visual-inertial odometry (VIO). The IMU is treated as the primary sensor, and the observations from LIO and VIO are used to constrain the accelerometer and gyroscope biases. Compared with previous point-only LIO methods, our approach exploits more geometric information by introducing both line and plane features into motion estimation. The VIO likewise exploits environmental structure by employing both point and line features. The proposed method has been extensively tested in long-duration metro environments with a maintenance vehicle. Experimental results show that the system is more accurate and robust than state-of-the-art approaches while running in real time. In addition, we develop a series of Virtual Reality (VR) applications for efficient, economical, and interactive rail vehicle state and trackside infrastructure monitoring, which have already been deployed on an outdoor testing railroad.
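To make the IMU-centric design concrete, the following is a minimal sketch (with hypothetical names, not the authors' implementation) of the fusion pattern the abstract describes: the IMU propagates the state at high rate, while lower-rate pose observations from LIO or VIO feed back residuals that correct the state and constrain the accelerometer bias, in the spirit of an error-state filter with fixed gains standing in for the covariance-derived ones.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

class ImuOdometry:
    """Toy IMU-centric estimator: high-rate propagation, low-rate correction."""

    def __init__(self):
        self.p = np.zeros(3)   # position
        self.v = np.zeros(3)   # velocity
        self.ba = np.zeros(3)  # accelerometer bias estimate

    def propagate(self, acc, dt):
        """High-rate IMU integration with the current bias estimate.

        Attitude and gyroscope bias are omitted to keep the sketch short."""
        a = acc - self.ba + GRAVITY          # bias-corrected acceleration
        self.p += self.v * dt + 0.5 * a * dt**2
        self.v += a * dt

    def correct(self, p_obs, gain=0.5):
        """Low-rate correction from a LIO/VIO position observation.

        The position residual feeds back into the position estimate and
        into the accelerometer bias with fixed illustrative gains; a real
        system would compute these gains from the filter covariance."""
        r = p_obs - self.p
        self.p += gain * r
        self.ba -= 0.1 * r                   # observation constrains the bias
```

The key design point mirrored here is that LIO and VIO never drive the state directly; they only supply observations against which the IMU-propagated state is corrected, which is what lets the framework combine tightly coupled accuracy with loosely coupled modularity.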