Whilst the availability of 3D LiDAR point cloud data has grown significantly in recent years, annotation remains expensive and time-consuming, driving demand for semi-supervised semantic segmentation methods in application domains such as autonomous driving. Existing work often employs relatively large segmentation backbone networks to improve segmentation accuracy, at the expense of computational cost. In addition, many approaches use uniform sampling to reduce the amount of ground-truth data needed for learning, often resulting in sub-optimal performance. To address these issues, we propose a new pipeline that employs a smaller architecture, requiring fewer ground-truth annotations to achieve superior segmentation accuracy compared to contemporary approaches. This is facilitated via a novel Sparse Depthwise Separable Convolution module that significantly reduces the network parameter count while retaining overall task performance. To effectively sub-sample our training data, we propose a new Spatio-Temporal Redundant Frame Downsampling (ST-RFD) method that leverages knowledge of sensor motion within the environment to extract a more diverse subset of training data frame samples. To make the most of the limited annotated data samples available, we further propose a soft pseudo-label method informed by LiDAR reflectivity. Our method outperforms contemporary semi-supervised work in terms of mIoU, using less labeled data, on the SemanticKITTI (59.5@5%) and ScribbleKITTI (58.1@5%) benchmark datasets, with a 2.3x reduction in model parameters and 641x fewer multiply-add operations, whilst also demonstrating significant performance improvement on limited training data (i.e., Less is More).
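The abstract does not specify the internals of the Sparse Depthwise Separable Convolution module, but the source of its parameter savings can be illustrated generically. The sketch below (a rough illustration with hypothetical layer sizes, not the paper's actual module) compares the parameter count of a standard 3D convolution against a depthwise separable factorization, which splits spatial filtering (per-channel depthwise kernel) from channel mixing (1x1x1 pointwise kernel):

```python
def conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard 3D convolution (bias omitted):
    every output channel has a dense k*k*k kernel over all input channels."""
    return c_in * c_out * k ** 3

def dws_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise separable 3D convolution (bias omitted):
    depthwise spatial term (c_in * k^3) plus pointwise mixing term (c_in * c_out)."""
    return c_in * k ** 3 + c_in * c_out

# Hypothetical layer: 64 -> 64 channels, 3x3x3 kernel (illustrative only).
standard = conv3d_params(64, 64, 3)       # 64 * 64 * 27 = 110592
separable = dws_conv3d_params(64, 64, 3)  # 64 * 27 + 64 * 64 = 5824
print(standard, separable)                # ~19x fewer parameters per layer
```

Per-layer savings of this kind compound across a backbone, which is consistent with (though not a derivation of) the 2.3x overall parameter reduction reported in the abstract.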