3D LiDAR sensors are indispensable for the robust perception of autonomous mobile robots. However, deploying LiDAR-based perception algorithms often fails due to a domain gap from the training environment, such as inconsistent angular resolution and missing measurement properties. Existing studies have tackled this issue by learning inter-domain mappings, but their transferability is constrained by the training configuration, and training is susceptible to a peculiar lossy noise called ray-drop. To address this issue, this paper proposes a generative model of LiDAR range images applicable to data-level domain transfer. Motivated by the fact that LiDAR measurement is based on point-by-point range imaging, we train an implicit image representation-based generative adversarial network along with a differentiable ray-drop effect. We demonstrate the fidelity and diversity of our model in comparison with point-based and image-based state-of-the-art generative models. We also showcase upsampling and restoration applications. Furthermore, we introduce a Sim2Real application for LiDAR semantic segmentation. We demonstrate that our method is effective as a realistic ray-drop simulator and outperforms state-of-the-art methods.
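The abstract does not spell out how the ray-drop effect is made differentiable. One common way to backpropagate through a binary keep/drop decision per pixel of a range image is the binary-concrete (Gumbel-sigmoid) relaxation; the sketch below illustrates that idea under this assumption. The function name `differentiable_ray_drop`, the tensor shapes, and the temperature parameter `tau` are illustrative choices, not the paper's implementation.

```python
import torch

def differentiable_ray_drop(range_image: torch.Tensor,
                            drop_logits: torch.Tensor,
                            tau: float = 0.5) -> torch.Tensor:
    """Apply a relaxed Bernoulli (Gumbel-sigmoid) ray-drop mask.

    range_image: (B, 1, H, W) generated range values.
    drop_logits: (B, 1, H, W) per-pixel logits for ray survival,
                 e.g. predicted by the generator alongside the range.
    tau:         relaxation temperature; lower values approach a
                 hard binary mask but give higher-variance gradients.
    """
    # Sample logistic noise for the binary-concrete relaxation.
    u = torch.rand_like(drop_logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)
    # Soft keep-mask in (0, 1); differentiable w.r.t. drop_logits,
    # so the discriminator's gradient reaches the drop probabilities.
    keep_mask = torch.sigmoid((drop_logits + noise) / tau)
    # Dropped rays return no measurement (range = 0 by convention).
    return range_image * keep_mask
```

In such a setup, the mask is sampled stochastically at training time so the generator learns a per-pixel drop probability, and at inference the mask can be thresholded (or sampled) to produce realistic lossy scans.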