Panoptic segmentation of point clouds is a crucial task that enables autonomous vehicles to comprehend their vicinity using their highly accurate and reliable LiDAR sensors. Existing top-down approaches tackle this problem by either combining independent task-specific networks or translating methods from the image domain, ignoring the intricacies of LiDAR data, and thus often resulting in sub-optimal performance. In this paper, we present the novel top-down Efficient LiDAR Panoptic Segmentation (EfficientLPS) architecture that addresses multiple challenges in segmenting LiDAR point clouds, including distance-dependent sparsity, severe occlusions, large scale variations, and re-projection errors. EfficientLPS comprises a novel shared backbone that encodes with strengthened geometric transformation modeling capacity and aggregates semantically rich range-aware multi-scale features. It incorporates new scale-invariant semantic and instance segmentation heads along with the panoptic fusion module, which is supervised by our proposed panoptic periphery loss function. Additionally, we formulate a regularized pseudo labeling framework to further improve the performance of EfficientLPS by training on unlabelled data. We benchmark our proposed model on two large-scale LiDAR datasets: nuScenes, for which we also provide ground truth annotations, and SemanticKITTI. Notably, EfficientLPS sets the new state-of-the-art on both these datasets.