Recovering a dense depth image from sparse LiDAR scans is a challenging task. Despite the popularity of color-guided methods for sparse-to-dense depth completion, they treat all pixels equally during optimization, ignoring the uneven distribution of the sparse depth map and the outliers accumulated in the synthesized ground truth. In this work, we introduce uncertainty-driven loss functions that improve the robustness of depth completion and handle its inherent uncertainty. Specifically, we propose an explicit uncertainty formulation for robust depth completion based on Jeffrey's prior. A parametric uncertainty-driven loss is introduced and translated into new loss functions that are robust to noisy or missing data. We further propose a multiscale joint prediction model that simultaneously predicts depth and uncertainty maps. The estimated uncertainty map is also used to perform adaptive prediction on pixels with high uncertainty, producing a residual map that refines the completion results. Our method has been tested on the KITTI Depth Completion Benchmark and achieves state-of-the-art robustness in terms of the MAE, IMAE, and IRMSE metrics.
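To make the idea of an uncertainty-driven loss concrete, the sketch below shows a generic heteroscedastic L1 formulation in which a per-pixel predicted uncertainty down-weights residuals on unreliable pixels while a log-uncertainty regularizer discourages trivially large uncertainty. This is a minimal illustration of the general technique, not necessarily the paper's exact Jeffrey's-prior formulation; the function name and array layout are assumptions for the example.

```python
import numpy as np

def uncertainty_driven_l1(pred_depth, gt_depth, log_sigma, valid_mask):
    """Generic heteroscedastic L1 loss (illustrative, not the paper's exact form).

    pred_depth, gt_depth : (H, W) depth maps in the same units.
    log_sigma            : (H, W) predicted per-pixel log-uncertainty.
    valid_mask           : (H, W) bool mask of pixels with ground truth.
    """
    residual = np.abs(pred_depth - gt_depth)
    # High-uncertainty pixels (e.g. sparse or outlier regions) contribute less,
    # while the +log_sigma term penalizes predicting large uncertainty everywhere.
    per_pixel = residual * np.exp(-log_sigma) + log_sigma
    return per_pixel[valid_mask].mean()
```

Raising the predicted uncertainty only on an outlier pixel lowers the overall loss, which is the behavior that makes such losses robust to noisy ground truth.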