In robotic applications, a key requirement for safe and efficient motion planning is the ability to map obstacle-free space in unknown, cluttered 3D environments. However, commodity-grade RGB-D cameras commonly used for sensing fail to register valid depth values on shiny, glossy, bright, or distant surfaces, leading to missing data in the map. To address this issue, we propose a framework that leverages probabilistic depth completion as an additional input for spatial mapping. We introduce a deep learning architecture that provides uncertainty estimates for the depth completion of RGB-D images. Our pipeline exploits the inferred missing depth values and their uncertainty to complement raw depth images and improve the speed and quality of free space mapping. Evaluations on synthetic data show that, across different indoor environments, our approach maps significantly more correct free space with relatively low error compared to using raw data alone, thereby producing more complete maps that can be directly used for robotic navigation tasks. The performance of our framework is validated using real-world data.
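To illustrate the idea of complementing raw depth with uncertainty-aware completions, the sketch below fuses a raw depth image (with missing pixels encoded as 0) with a network's predicted depth, accepting predictions only where the per-pixel uncertainty is low. This is a minimal illustration, not the paper's actual pipeline; the function name `fuse_depth`, the 0-encoding of invalid pixels, and the `max_std` threshold are assumptions for the example.

```python
import numpy as np

def fuse_depth(raw_depth, pred_depth, pred_std, max_std=0.15):
    """Fill missing raw depth with predicted depth where the model's
    per-pixel uncertainty (std, in metres) is below a threshold.

    Assumption for this sketch: invalid raw pixels are encoded as 0.
    Returns the fused depth map and a mask of inferred pixels, so a
    downstream mapping backend can integrate them with lower weight.
    """
    missing = raw_depth == 0.0          # pixels the sensor failed on
    confident = pred_std < max_std      # pixels the network is sure about
    fill = missing & confident          # only fill where both hold
    fused = raw_depth.copy()
    fused[fill] = pred_depth[fill]
    return fused, fill

# Toy example: one confident completion, one rejected as too uncertain.
raw = np.array([[1.0, 0.0], [0.0, 2.0]])
pred = np.array([[1.1, 3.0], [4.0, 2.1]])
std = np.array([[0.05, 0.10], [0.50, 0.02]])
fused, fill = fuse_depth(raw, pred, std)
```

In a full pipeline, the `fill` mask would let the occupancy mapper weight inferred depth less than directly measured depth when updating free-space probabilities.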