Moving object segmentation (MOS) is the task of distinguishing moving objects, e.g., moving vehicles and pedestrians, from the surrounding static environment. The accuracy of MOS influences odometry, map construction, and planning tasks. In this paper, we propose a semantics-guided convolutional neural network for moving object segmentation. The network takes sequential LiDAR range images as input. Instead of segmenting the moving objects directly, the network performs single-scan-based semantic segmentation and multi-scan-based moving object segmentation in turn. The semantic segmentation module provides semantic priors for the MOS module, in which we propose an adjacent scan association (ASA) module that transforms the semantic features of adjacent scans into the same coordinate system to fully exploit cross-scan semantic features. Finally, by analyzing the differences between the transformed features, reliable MOS results can be obtained quickly. Experimental results on the SemanticKITTI MOS dataset demonstrate the effectiveness of our work.
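The core idea of the ASA module — re-expressing the previous scan's per-point semantic features in the current scan's coordinate frame so that the two range images become directly comparable — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sensor parameters (a KITTI-like 64-beam field of view), the feature dimension, and the function names are all assumptions introduced here for clarity.

```python
import numpy as np

def spherical_project(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project 3D LiDAR points to range-image pixel coordinates.
    Sensor parameters are hypothetical KITTI-like values, not the paper's."""
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up_r - fov_down_r
    depth = np.linalg.norm(points, axis=1)
    yaw = -np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / np.maximum(depth, 1e-8))
    u = 0.5 * (yaw / np.pi + 1.0) * W            # azimuth -> column
    v = (1.0 - (pitch - fov_down_r) / fov) * H   # elevation -> row
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
    return v, u, depth

def associate_adjacent_scan(prev_points, prev_feats, T_rel, H=64, W=1024, C=8):
    """Warp the previous scan's per-point semantic features into the current
    scan's range image, given the relative pose T_rel (4x4 homogeneous)."""
    # Transform previous-scan points into the current coordinate frame.
    pts_h = np.hstack([prev_points, np.ones((prev_points.shape[0], 1))])
    pts_cur = (T_rel @ pts_h.T).T[:, :3]
    # Re-project into the current scan's range-image grid.
    v, u, _ = spherical_project(pts_cur, H, W)
    warped = np.zeros((H, W, C), dtype=prev_feats.dtype)
    warped[v, u] = prev_feats  # later points overwrite collisions at a pixel
    return warped

# Toy usage with a random scan and an identity relative pose.
rng = np.random.default_rng(0)
prev_pts = rng.uniform(-10.0, 10.0, size=(1000, 3))
prev_feats = rng.standard_normal((1000, 8)).astype(np.float32)
warped = associate_adjacent_scan(prev_pts, prev_feats, np.eye(4))
cur_feats = rng.standard_normal((64, 1024, 8)).astype(np.float32)
residual = cur_feats - warped  # cross-scan difference fed to the MOS head
```

Once both scans' features live in the same grid, a simple per-pixel difference exposes regions whose semantics are inconsistent across time, which is where moving objects appear.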