3D laser scanning by LiDAR sensors plays an important role in enabling mobile robots to understand their surroundings. Nevertheless, not all systems achieve high resolution and accuracy due to hardware limitations, weather conditions, and other factors. Generative modeling of LiDAR data as a scene prior is a promising way to compensate for unreliable or incomplete observations. In this paper, we propose a novel generative model for learning LiDAR data based on generative adversarial networks. As in related studies, we process LiDAR data as a compact yet lossless representation, a cylindrical depth map. However, despite the smoothness of real-world objects, many points on the depth map are dropped out during laser measurement, which makes learning difficult for generative models. To circumvent this issue, we introduce measurement uncertainty into the generation process, which allows the model to learn a disentangled representation of the underlying shape and the dropout noise from a collection of real LiDAR data. To simulate the lossy measurement, we adopt a differentiable sampling framework that drops points based on the learned uncertainty. We demonstrate the effectiveness of our method on synthesis and reconstruction tasks using two datasets. We further showcase potential applications by restoring LiDAR data with various types of corruption.
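To make the point-dropping idea concrete, the sketch below illustrates one way such a differentiable dropout step could look. It is a minimal example only, assuming a generator that outputs a cylindrical depth map together with per-pixel "keep" logits; the relaxed-Bernoulli (Gumbel-sigmoid) sampler with a straight-through estimator stands in for the differentiable sampling framework mentioned in the abstract, and all tensor shapes and names are illustrative, not taken from the paper.

```python
import torch

def sample_dropout_mask(logits, temperature=1.0, eps=1e-8):
    # Relaxed-Bernoulli ("Gumbel-sigmoid") sample: differentiable w.r.t. logits.
    u = torch.rand_like(logits)
    logistic_noise = torch.log(u + eps) - torch.log(1.0 - u + eps)
    soft_mask = torch.sigmoid((logits + logistic_noise) / temperature)
    # Straight-through estimator: hard 0/1 mask in the forward pass,
    # gradients flow through the soft relaxation in the backward pass.
    hard_mask = (soft_mask > 0.5).float()
    return hard_mask + (soft_mask - soft_mask.detach())

# Hypothetical generator outputs: a cylindrical depth map and per-pixel dropout logits.
batch, height, width = 4, 64, 512
depth = torch.rand(batch, 1, height, width, requires_grad=True)    # normalized depth
logits = torch.randn(batch, 1, height, width, requires_grad=True)  # measurement "keep" logits

mask = sample_dropout_mask(logits, temperature=0.3)
noisy_depth = depth * mask  # dropped points become zero, as in raw LiDAR scans

# Gradients reach both the underlying depth and the dropout logits,
# so shape and dropout noise can be learned jointly from real scans.
noisy_depth.sum().backward()
print(depth.grad.shape, logits.grad.shape)
```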