This paper presents a novel CNN-based approach for synthesizing high-resolution LiDAR point cloud data. Our approach generates semantically and perceptually realistic results guided by specialized loss functions. First, we employ a modified per-point loss that accounts for missing LiDAR point measurements. Second, we align the quality of our generated output with real-world sensor data by applying a perceptual loss. In large-scale experiments on real-world datasets, we evaluate both the geometric accuracy and the semantic segmentation performance of our generated data against ground truth. In a mean opinion score test, we further assess the perceptual quality of our generated point clouds. Our results demonstrate significant quantitative and qualitative improvements in both geometry and semantics over traditional non-CNN-based upsampling methods.
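To make the loss design concrete, below is a minimal sketch of how the two guiding terms could be combined, assuming the LiDAR scans are represented as 2-D range images and that a fixed pretrained network serves as the feature extractor for the perceptual term. The names `masked_point_loss`, `feat_extractor`, and `lambda_perc` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def masked_point_loss(pred, target, valid_mask):
    """L1 loss over range-image pixels, ignoring missing LiDAR returns.

    pred, target: (B, 1, H, W) range images.
    valid_mask:   (B, 1, H, W), 1 where the sensor returned a measurement, 0 otherwise.
    """
    diff = torch.abs(pred - target) * valid_mask
    # Normalize by the number of valid points so empty regions do not dilute the loss.
    return diff.sum() / valid_mask.sum().clamp(min=1)


def total_loss(pred, target, valid_mask, feat_extractor, lambda_perc=0.1):
    """Combine the masked per-point term with a feature-space (perceptual) term."""
    point_term = masked_point_loss(pred, target, valid_mask)

    # Perceptual term: distance between activations of a fixed, pretrained network.
    with torch.no_grad():
        feat_target = feat_extractor(target)
    feat_pred = feat_extractor(pred)
    perc_term = F.mse_loss(feat_pred, feat_target)

    return point_term + lambda_perc * perc_term
```

The masking step reflects the first loss described above (handling missing returns), while the feature-space distance stands in for the perceptual loss; the weighting factor `lambda_perc` is a placeholder hyperparameter.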