In this work, we present a new paradigm, called 4D-StOP, to tackle the task of 4D Panoptic LiDAR Segmentation. 4D-StOP first generates spatio-temporal proposals using voting-based center predictions, where each point in the 4D volume votes for a corresponding center. These tracklet proposals are further aggregated using learned geometric features. The tracklet aggregation method effectively generates a video-level 4D scene representation over the entire space-time volume. This is in contrast to existing end-to-end trainable state-of-the-art approaches which use spatio-temporal embeddings that are represented by Gaussian probability distributions. Our voting-based tracklet generation method followed by geometric feature-based aggregation generates significantly improved panoptic LiDAR segmentation quality when compared to modeling the entire 4D volume using Gaussian probability distributions. 4D-StOP achieves a new state-of-the-art when applied to the SemanticKITTI test dataset with a score of 63.9 LSTQ, which is a large (+7%) improvement compared to current best-performing end-to-end trainable methods. The code and pre-trained models are available at: https://github.com/LarsKreuzberg/4D-StOP.
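The voting-and-aggregation idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the greedy radius-based grouping, the `radius` value, and the function names are simplifying assumptions standing in for the paper's learned geometric-feature aggregation; here each point simply adds a predicted offset to vote for its instance center, and nearby votes are grouped into tracklet proposals.

```python
import numpy as np

def vote_and_cluster(points, offsets, radius=0.6):
    """Simplified stand-in for voting-based proposal generation.

    points:  (N, 3) array of LiDAR point coordinates.
    offsets: (N, 3) predicted per-point offsets to the instance center
             (in 4D-StOP these come from a learned voting module).
    Returns a list of index arrays, one per proposal.
    """
    votes = points + offsets          # each point votes for a center
    unassigned = np.ones(len(votes), dtype=bool)
    proposals = []
    while unassigned.any():
        # Take an arbitrary unassigned vote as a seed center
        # (hypothetical greedy scheme, not the paper's aggregation).
        seed = votes[unassigned][0]
        members = np.linalg.norm(votes - seed, axis=1) < radius
        members &= unassigned
        proposals.append(np.flatnonzero(members))
        unassigned &= ~members
    return proposals
```

In the actual method, votes are cast per point over the whole space-time volume, so points from the same object across frames end up in one tracklet proposal; learned geometric features then refine which proposals are merged.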