Safe motion planning in robotics requires planning into space that has been verified to be free of obstacles. However, obtaining such environment representations using lidars is challenging due to the sparsity of their depth measurements. We present a learning-aided 3D lidar reconstruction framework that upsamples sparse lidar depth measurements with the aid of overlapping camera images, generating denser reconstructions with more definitively free space than can be achieved with the raw lidar measurements alone. We use a neural network with an encoder-decoder structure to predict dense depth images, along with depth uncertainty estimates, which are fused using a volumetric mapping system. We conduct experiments on real-world outdoor datasets captured using a handheld sensing device and a legged robot. Using input data from a 16-beam lidar mapping a building network, our experiments show that our approach increases the amount of estimated free space by more than 40%. We also show that our approach, trained on a synthetic dataset, generalises well to real-world outdoor scenes without additional fine-tuning. Finally, we demonstrate how motion planning tasks can benefit from these denser reconstructions.
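To make the described pipeline concrete, the following is a minimal PyTorch sketch of an encoder-decoder depth-completion network of the kind outlined above: it fuses an RGB image with a sparse lidar depth image and predicts a dense depth map together with a per-pixel log-variance as an uncertainty estimate. The class name `DepthCompletionNet`, the layer sizes, and the heteroscedastic Gaussian loss are illustrative assumptions for exposition, not the authors' exact architecture or training objective.

```python
import torch
import torch.nn as nn

class DepthCompletionNet(nn.Module):
    """Illustrative encoder-decoder (not the paper's exact model):
    stacks an RGB image and a sparse lidar depth image as a 4-channel
    input and predicts [dense depth, log-variance] per pixel."""
    def __init__(self, base=32):
        super().__init__()
        # Encoder: two strided convolutions downsample by 4x in total.
        self.enc1 = nn.Sequential(nn.Conv2d(4, base, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: transposed convolutions restore the input resolution.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU())
        self.dec2 = nn.ConvTranspose2d(base, 2, 4, stride=2, padding=1)  # 2 channels: depth, log_var

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)   # (B, 4, H, W)
        x = self.enc2(self.enc1(x))
        out = self.dec2(self.dec1(x))               # (B, 2, H, W)
        depth, log_var = out[:, :1], out[:, 1:]
        return depth, log_var

def nll_loss(pred_depth, log_var, gt_depth, mask):
    """Heteroscedastic Gaussian negative log-likelihood over valid
    pixels: the network may assign high variance to uncertain pixels,
    but pays a log-variance penalty for doing so."""
    err = (pred_depth - gt_depth) ** 2
    loss = 0.5 * (torch.exp(-log_var) * err + log_var)
    return loss[mask].mean()

# Minimal usage example on random data (shapes only).
if __name__ == "__main__":
    net = DepthCompletionNet()
    rgb = torch.rand(1, 3, 64, 64)
    sparse = torch.rand(1, 1, 64, 64)
    depth, log_var = net(rgb, sparse)
    print(depth.shape, log_var.shape)  # torch.Size([1, 1, 64, 64]) each
```

Under this reading, the predicted per-pixel log-variance is what a downstream volumetric mapping system could use to weight each completed depth measurement during fusion, so that confidently predicted pixels contribute more strongly to the free-space estimate than uncertain ones.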