The ability to detect and segment moving objects in a scene is essential for building consistent maps, making future state predictions, avoiding collisions, and planning. In this paper, we address the problem of moving object segmentation from 3D LiDAR scans. We propose a novel approach that pushes the current state of the art in LiDAR-only moving object segmentation forward to provide relevant information for autonomous robots and other vehicles. Instead of segmenting the point cloud semantically, i.e., predicting semantic classes such as vehicles, pedestrians, buildings, roads, etc., our approach accurately segments the scene into moving and static objects, i.e., distinguishing between moving and parked cars. Our proposed approach exploits sequential range images from a rotating 3D LiDAR sensor as an intermediate representation combined with a convolutional neural network and runs faster than the frame rate of the sensor. We compare our approach to several other state-of-the-art methods, showing superior segmentation quality in urban environments. Additionally, we created a new benchmark for LiDAR-based moving object segmentation based on SemanticKITTI. We publish it to allow other researchers to compare their approaches transparently, and we will publish our code.
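To illustrate the intermediate representation mentioned above, the following is a minimal sketch of how a rotating LiDAR scan can be spherically projected onto a range image. It is not the paper's implementation; the function name and the parameter values (image size, vertical field of view) are hypothetical and roughly match a 64-beam sensor.

```python
import numpy as np

def project_to_range_image(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Spherically project an (N, 3) LiDAR point cloud onto an H x W range image.

    Hypothetical parameters: H, W, and the vertical field of view are assumptions
    chosen for a typical 64-beam rotating sensor; adjust for the actual device.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1)            # range of each point

    yaw = np.arctan2(y, x)                                # horizontal angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))  # vertical angle

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                     # column index
    v = (1.0 - (pitch - fov_down) / fov) * H              # row index
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Keep the closest return per pixel by writing points in order of decreasing range.
    order = np.argsort(r)[::-1]
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    range_image[v[order], u[order]] = r[order]
    return range_image
```

Stacking such range images from consecutive scans yields the sequential input that a convolutional network can consume at sensor frame rate.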