Our long-term goal is to use image-based depth completion to quickly create 3D models from sparse point clouds, e.g. from SfM or SLAM. Much progress has been made in depth completion. However, most current works assume well-distributed samples of known depth, e.g. Lidar or random uniform sampling, and perform poorly on uneven samples, such as those from keypoints, due to the large unsampled regions. To address this problem, we extend CSPN with multiscale prediction and a dilated kernel, leading to much better completion of keypoint-sampled depth. We also show that a model trained on NYUv2 creates surprisingly good point clouds on ETH3D by completing sparse SfM points.
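To make the dilated propagation idea concrete, below is a minimal PyTorch sketch of a single CSPN-style update over a dilated 3x3 neighbourhood; the function name, tensor layout, and affinity normalisation are our assumptions based on the original CSPN formulation, not the paper's released code.

```python
import torch
import torch.nn.functional as F

def dilated_cspn_step(depth, affinity, sparse_depth, valid_mask, dilation=2):
    """One CSPN-style propagation step with a dilated 3x3 kernel (illustrative sketch).

    depth:        (B, 1, H, W) current depth estimate
    affinity:     (B, 8, H, W) predicted affinities for the 8 dilated neighbours
    sparse_depth: (B, 1, H, W) known sparse depths (0 where unknown)
    valid_mask:   (B, 1, H, W) 1 where sparse_depth is valid
    """
    B, _, H, W = depth.shape

    # Normalise affinities so the neighbour weights sum to at most 1 in magnitude;
    # the remainder becomes the centre-pixel weight, as in the original CSPN.
    abs_sum = affinity.abs().sum(dim=1, keepdim=True).clamp(min=1e-6)
    neigh_w = affinity / abs_sum
    center_w = 1.0 - neigh_w.sum(dim=1, keepdim=True)

    # Gather the 8 dilated neighbours of every pixel; dilation > 1 widens the
    # receptive field per iteration, helping fill large unsampled regions.
    patches = F.unfold(depth, kernel_size=3, dilation=dilation, padding=dilation)
    patches = patches.view(B, 9, H, W)
    neighbours = torch.cat([patches[:, :4], patches[:, 5:]], dim=1)  # drop the centre entry

    # Weighted propagation, then re-inject the known sparse measurements.
    out = center_w * depth + (neigh_w * neighbours).sum(dim=1, keepdim=True)
    out = valid_mask * sparse_depth + (1.0 - valid_mask) * out
    return out
```

In practice this step would be applied for several iterations at each scale of the multiscale prediction, with the coarser scales handling the largest gaps between keypoint samples.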