SLAM systems built on the static-scene assumption introduce significant estimation errors when a large number of moving objects appear in the field of view. Tracking and maintaining semantic objects helps the system understand the scene and provides rich decision information for planning and control modules. This paper introduces MLO, a multi-object lidar odometry that tracks ego-motion and movable objects using only a lidar sensor. First, a perception module that fuses geometric and object-level cues extracts foreground movable objects, the road surface, and static background features. While robustly estimating ego-motion, the system performs multi-object tracking through a least-squares method that fuses 3D bounding boxes with geometric point clouds. A continuous 4D semantic object map can then be built along the timeline. Our approach is evaluated qualitatively and quantitatively under different scenarios on the public KITTI dataset. The experimental results show that the ego-localization accuracy of MLO is better than that of the A-LOAM system in highly dynamic, unstructured, and semantically unknown scenes. Meanwhile, the multi-object tracking method with semantic-geometry fusion also shows clear advantages in accuracy and tracking robustness compared with either single method.
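As a rough illustration of the fused least-squares idea, the sketch below estimates a per-object rigid motion by jointly minimizing point-correspondence residuals and a 3D bounding-box center residual. This is a minimal sketch under assumed inputs (known point correspondences, box centers, and a hand-picked weight), not the paper's exact formulation; the function name `estimate_object_motion` and the weight `box_weight` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def estimate_object_motion(pts_prev, pts_curr, box_center_prev, box_center_curr,
                           box_weight=5.0):
    """Estimate an SE(3) motion for one tracked object.

    pts_prev, pts_curr:       (N, 3) corresponding object points in two frames.
    box_center_prev/curr:     (3,) detected 3D bounding-box centers.
    box_weight:               assumed relative weight of the box-center residual.
    Returns (R, t) with R a 3x3 rotation matrix and t a translation vector.
    """

    def residuals(x):
        # x[:3] is an axis-angle rotation, x[3:] is the translation.
        R = Rotation.from_rotvec(x[:3]).as_matrix()
        t = x[3:]
        # Geometric term: transformed previous points should match current points.
        point_res = (pts_prev @ R.T + t - pts_curr).ravel()
        # Semantic term: transformed previous box center should match current center.
        box_res = np.sqrt(box_weight) * (R @ box_center_prev + t - box_center_curr)
        return np.concatenate([point_res, box_res])

    sol = least_squares(residuals, x0=np.zeros(6))
    return Rotation.from_rotvec(sol.x[:3]).as_matrix(), sol.x[3:]


if __name__ == "__main__":
    # Synthetic check: recover a known motion from noisy correspondences.
    rng = np.random.default_rng(0)
    R_true = Rotation.from_rotvec([0.0, 0.0, 0.1]).as_matrix()
    t_true = np.array([1.0, 0.2, 0.0])
    pts_prev = rng.uniform(-2, 2, size=(50, 3))
    pts_curr = pts_prev @ R_true.T + t_true + 0.01 * rng.standard_normal((50, 3))
    c_prev = pts_prev.mean(axis=0)
    c_curr = R_true @ c_prev + t_true
    R_est, t_est = estimate_object_motion(pts_prev, pts_curr, c_prev, c_curr)
    print("estimated translation:", t_est)
```

In this toy setup the box-center residual simply anchors the translation when point correspondences are sparse or noisy; how the actual system weights and associates the two cues is described in the method section.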