We present a novel method to reconstruct 3D scenes from images by leveraging deep dense monocular SLAM and fast uncertainty propagation. The proposed approach reconstructs scenes densely, accurately, and in real time, while remaining robust to the extremely noisy depth estimates produced by dense monocular SLAM. Unlike previous approaches, which either use ad hoc depth filters or estimate depth uncertainty from RGB-D cameras' sensor models, our probabilistic depth uncertainty derives directly from the information matrix of the underlying bundle adjustment problem in SLAM. We show that the resulting depth uncertainty provides an excellent signal for weighting the depth maps during volumetric fusion. Without our depth uncertainty, the resulting mesh is noisy and riddled with artifacts, whereas our approach produces an accurate 3D mesh with significantly fewer artifacts. We provide results on the challenging EuRoC dataset and show that our approach achieves 92% better accuracy than directly fusing depths from monocular SLAM, and up to 90% improvement over the best competing approach.
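To make the core idea concrete, the sketch below illustrates the two steps the abstract describes: reading per-pixel depth variances off the bundle-adjustment information matrix (by marginalizing the poses via a Schur complement), and using the inverse variances as fusion weights in a standard TSDF running average. This is a minimal illustration under our own assumptions, not the paper's implementation; the function names, matrix partitioning, and the dense inverse are all hypothetical, and the abstract's "fast uncertainty propagation" presumably avoids the naive dense inverse shown here.

```python
import numpy as np

def depth_marginal_variance(H_pp, H_pd, h_dd):
    """Per-pixel depth variances from the BA information (Hessian) matrix.

    H_pp : (P, P) pose-pose information block
    H_pd : (P, D) pose-depth coupling block
    h_dd : (D,)   diagonal of the depth-depth block (diagonal in dense BA,
                  since per-pixel depths do not couple to each other)

    Marginalizing the poses gives the Schur complement
        S = H_dd - H_pd^T H_pp^{-1} H_pd,
    and the marginal variance of each depth is the corresponding
    diagonal entry of S^{-1}.
    """
    S = np.diag(h_dd) - H_pd.T @ np.linalg.solve(H_pp, H_pd)
    return np.diag(np.linalg.inv(S))  # dense inverse: for illustration only

def fuse_tsdf(tsdf, w, sdf_obs, sigma2, w_max=64.0):
    """Uncertainty-weighted TSDF update: each signed-distance observation
    is weighted by its inverse depth variance, so very noisy monocular
    depths barely move the fused volume while confident ones dominate."""
    w_obs = 1.0 / sigma2                                  # inverse-variance weight
    tsdf = (w * tsdf + w_obs * sdf_obs) / (w + w_obs)     # weighted running average
    w = np.minimum(w + w_obs, w_max)                      # cap accumulated weight
    return tsdf, w
```

Weighting by the inverse of the BA-derived variance is what replaces the ad hoc depth filters and RGB-D sensor models mentioned above: pixels whose depths are poorly constrained by the bundle adjustment receive near-zero weight and thus contribute almost nothing to the fused mesh.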