Self-supervised learning of egomotion and depth has recently attracted great attention. These learning models can provide pose and depth maps to support navigation and perception tasks for autonomous driving and robots, while requiring no high-precision ground-truth labels to train the networks. However, monocular vision-based methods suffer from the scale-ambiguity problem in pose estimation, so they cannot generate physically meaningful trajectories, which limits their real-world applications. We propose a novel self-supervised deep neural network framework that learns to estimate egomotion and depth with absolute metric scale from monocular images. The coarse depth scale is recovered by comparing point-cloud data against a pretrained model while ensuring the consistency of the photometric loss. The scale-ambiguity problem is solved by introducing a novel two-stage coarse-to-fine scale-recovery strategy that jointly refines the coarse poses and depths. Our model successfully produces pose and depth estimates at global metric scale, even in low-light conditions, e.g., driving at night. Evaluation on public datasets demonstrates that our model outperforms both representative traditional and learning-based VO and VIO methods, e.g., VINS-mono, ORB-SLAM, SC-Learner, and UnVIO.
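To make the coarse-to-fine idea concrete, below is a minimal illustrative sketch in Python. It assumes the coarse stage aligns an up-to-scale predicted depth map to a metric reference depth (e.g., from a pretrained model) via a median-ratio heuristic, and the fine stage refines the scalar by gradient descent on a robust L1 depth residual. The function names (`coarse_scale`, `refine_scale`) and the specific alignment rule are hypothetical simplifications; in the paper the fine stage jointly refines poses and depths through the photometric loss rather than a single scalar.

```python
import numpy as np

def coarse_scale(pred_depth, ref_depth, mask=None):
    """Coarse stage (sketch): global scale factor as the median ratio
    between a metric reference depth and the up-to-scale prediction."""
    if mask is None:
        mask = (pred_depth > 0) & (ref_depth > 0)
    return np.median(ref_depth[mask] / pred_depth[mask])

def refine_scale(pred_depth, ref_depth, s0, iters=200, lr=1e-2, mask=None):
    """Fine stage (sketch): refine the scalar s by gradient descent on the
    mean absolute depth residual |s * pred - ref|. A stand-in for the
    paper's joint photometric refinement of poses and depths."""
    if mask is None:
        mask = (pred_depth > 0) & (ref_depth > 0)
    d, r = pred_depth[mask], ref_depth[mask]
    s = s0
    for _ in range(iters):
        residual = s * d - r
        grad = np.mean(np.sign(residual) * d)  # d/ds of mean |s*d - r|
        s -= lr * grad
    return s

# Usage: align an up-to-scale prediction to a synthetic metric "reference".
pred = np.random.rand(64, 64) + 0.5
ref = 5.3 * pred + 0.01 * np.random.randn(64, 64)
s = refine_scale(pred, ref, coarse_scale(pred, ref))
print(f"recovered scale: {s:.3f}")  # should be close to 5.3
```

The median ratio is robust to outliers in either depth map, which is why it is a common choice for the coarse alignment; the fine stage then corrects the residual bias that a single robust statistic cannot capture.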