Segmentation of lidar data is a task that provides rich, point-wise information about the environment of robots or autonomous vehicles. The current best-performing neural networks for lidar segmentation are fine-tuned to specific datasets. Switching the lidar sensor without retraining on a large set of annotated data from the new sensor creates a domain shift, which causes the network performance to drop drastically. In this work we propose a new method for lidar domain adaptation, in which we use annotated panoptic lidar datasets and recreate the recorded scenes in the structure of a different lidar sensor. We narrow the domain gap to the target data by recreating panoptic data from one domain in another and mixing the generated data with parts of (pseudo-)labeled target-domain data. Our method improves the nuScenes-to-SemanticKITTI unsupervised domain adaptation performance by 15.2 mean Intersection over Union (mIoU) points, and by 48.3 mIoU points in our semi-supervised approach. We demonstrate a similar improvement for the SemanticKITTI-to-nuScenes domain adaptation, by 21.8 and 51.5 mIoU points, respectively. We compare our method with two state-of-the-art approaches for semantic lidar segmentation domain adaptation, showing significant improvements in both the unsupervised and the semi-supervised setting. Furthermore, we successfully apply our proposed method to two entirely unlabeled datasets from two state-of-the-art lidar sensors, the Velodyne Alpha Prime and the InnovizTwo, and train well-performing semantic segmentation networks for both.
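The core idea of recreating a recorded scene in the structure of a different lidar sensor can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it only shows one simple, assumed approach: re-projecting an annotated point cloud onto the beam/azimuth grid of a hypothetical target sensor (here 32 beams, 1024 azimuth columns, a 10° to -30° vertical field of view), keeping the closest return per cell as a real scanner would. All function and parameter names are illustrative.

```python
import numpy as np

def resample_to_sensor(points, labels, n_beams=32, n_cols=1024,
                       fov_up=10.0, fov_down=-30.0):
    """Hypothetical sketch: re-project an annotated point cloud (N, 3)
    with per-point labels (N,) onto the ring/azimuth grid of a target
    lidar sensor, keeping the nearest return per grid cell."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                              # [-pi, pi]
    elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-6)))

    # Map each point to the target sensor's beam row and azimuth column.
    row = (fov_up - elevation) / (fov_up - fov_down) * (n_beams - 1)
    col = (azimuth + np.pi) / (2 * np.pi) * (n_cols - 1)
    row = np.clip(np.round(row), 0, n_beams - 1).astype(int)
    col = np.clip(np.round(col), 0, n_cols - 1).astype(int)

    # Assign far points first so near points overwrite them:
    # each cell ends up holding its closest return, as a sensor would.
    order = np.argsort(-r)
    grid_pts = np.full((n_beams, n_cols, 3), np.nan)
    grid_lbl = np.full((n_beams, n_cols), -1, dtype=int)
    grid_pts[row[order], col[order]] = points[order]
    grid_lbl[row[order], col[order]] = labels[order]

    # Return only the cells that received a labeled return.
    mask = grid_lbl.reshape(-1) >= 0
    return grid_pts.reshape(-1, 3)[mask], grid_lbl.reshape(-1)[mask]
```

The resampled, still-labeled scans could then be mixed with (pseudo-)labeled target-domain scans to form the training set, per the abstract; the occlusion handling and sensor modeling of the actual method are more involved than this nearest-return heuristic.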