The ability to detect and segment moving objects in a scene is essential for building consistent maps, making future state predictions, avoiding collisions, and planning. In this paper, we address the problem of moving object segmentation from 3D LiDAR scans. We propose a novel approach that pushes the current state of the art in LiDAR-only moving object segmentation forward and provides relevant information for autonomous robots and other vehicles. Instead of segmenting the point cloud semantically, i.e., predicting semantic classes such as vehicles, pedestrians, and roads, our approach accurately segments the scene into moving and static objects, e.g., distinguishing between moving and parked cars. Our proposed approach exploits sequential range images from a rotating 3D LiDAR sensor as an intermediate representation, combined with a convolutional neural network, and runs faster than the frame rate of the sensor. We compare our approach to several other state-of-the-art methods, showing superior segmentation quality in urban environments. Additionally, we created a new benchmark for LiDAR-based moving object segmentation based on SemanticKITTI and published it, together with our code, to allow other researchers to compare their approaches transparently.
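The two building blocks named in the abstract, a range-image representation of each scan and a CNN over sequences of such images, are standard enough to sketch. Below is a minimal NumPy sketch of the spherical projection that turns a LiDAR point cloud into a range image, plus the per-pixel residual between two consecutive range images that hints at motion. The function names, field-of-view values, and image resolution are illustrative assumptions (chosen to resemble a 64-beam scanner like the one behind SemanticKITTI), not the paper's exact implementation.

```python
import numpy as np

def spherical_projection(points, h=64, w=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.

    The field-of-view values roughly match a 64-beam sensor; they are
    assumptions for illustration. Pixels without a return are set to -1.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)          # range of each point

    yaw = np.arctan2(y, x)                              # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Map angles to pixel coordinates: azimuth -> column, elevation -> row.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down) / fov) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    # Write points far-to-near so nearer returns overwrite farther ones.
    order = np.argsort(r)[::-1]
    range_image = np.full((h, w), -1.0, dtype=np.float32)
    range_image[v[order], u[order]] = r[order]
    return range_image

def residual_image(range_t, range_t_prev):
    """Normalized range difference between consecutive range images.

    In practice the previous scan must first be transformed into the
    current scan's frame using the estimated ego-motion (omitted here);
    large residuals then suggest moving objects.
    """
    valid = (range_t > 0) & (range_t_prev > 0)
    res = np.zeros_like(range_t)
    res[valid] = np.abs(range_t[valid] - range_t_prev[valid]) / range_t[valid]
    return res
```

Operating on this dense 2D grid rather than on raw 3D points is what allows an off-the-shelf CNN to process full scans faster than the sensor frame rate (typically 10 Hz for such rotating scanners).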