We present a method for inferring dense depth maps from images and sparse depth measurements by leveraging synthetic data to learn the association of sparse point clouds with dense natural shapes, and using the image as evidence to validate the predicted depth map. Our learned prior for natural shapes uses only sparse depth as input, not images, so the method is not affected by the covariate shift that arises when transferring learned models from synthetic to real data. This allows us to use abundant synthetic data with ground truth to learn the most difficult component of the reconstruction process, which is topology estimation, and to use the image to refine the prediction based on photometric evidence. Our approach uses fewer parameters than previous methods, yet achieves the state of the art on both indoor and outdoor benchmark datasets. Code available at: https://github.com/alexklwong/learning-topology-synthetic-data.