Exploiting the internal spatial geometric constraints of sparse LiDAR data benefits depth completion, yet this direction has not been well explored. This paper proposes an efficient method to learn a geometry-aware embedding that encodes local and global geometric structure from 3D points, e.g., scene layout and objects' sizes and shapes, to guide dense depth estimation. Specifically, we utilize a dynamic graph representation to model generalized geometric relationships from irregular point clouds in a flexible and efficient manner. Further, we combine this embedding with the corresponding RGB appearance information to infer the missing depths of the scene with well-preserved structural details. The key to our method is integrating an implicit 3D geometric representation into a 2D learning architecture, which leads to a better trade-off between performance and efficiency. Extensive experiments demonstrate that the proposed method outperforms previous works and can reconstruct fine depths with crisp boundaries in regions that those works over-smooth. The ablation study gives further insight into our method, which achieves significant gains with a simple design while having better generalization capability and stability. The code is available at https://github.com/Wenchao-Du/GAENet.
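The dynamic graph representation mentioned above can be illustrated with a minimal sketch in the spirit of EdgeConv-style point-cloud methods (an assumption about the general technique, not the authors' released code; in the actual network the per-edge geometric features would feed learned layers rather than being used directly):

```python
import numpy as np

def knn_graph(points, k):
    """Build a dynamic k-NN graph over a point set (N, 3).
    Returns (N, k) indices of each point's nearest neighbors."""
    # Pairwise squared Euclidean distances, self-distance excluded.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    np.fill_diagonal(d2, np.inf)
    return np.argsort(d2, axis=1)[:, :k]

def geometry_embedding(points, k=4):
    """Hypothetical geometry-aware embedding: for each point x_i, gather
    edge features [x_i, x_j - x_i] over its k neighbors and max-pool them.
    Relative offsets capture local structure; absolute coordinates keep
    global layout information."""
    idx = knn_graph(points, k)                 # (N, k)
    neighbors = points[idx]                    # (N, k, 3)
    rel = neighbors - points[:, None, :]       # local geometric offsets
    center = np.broadcast_to(points[:, None, :], rel.shape)
    feats = np.concatenate([center, rel], axis=-1)  # (N, k, 6)
    return feats.max(axis=1)                   # (N, 6) per-point embedding

pts = np.random.rand(100, 3).astype(np.float32)
emb = geometry_embedding(pts, k=4)
print(emb.shape)
```

Because the graph is recomputed from the current point positions rather than fixed in advance, the same routine handles irregular, varying-density LiDAR returns; in practice the embedding would be projected into the 2D image grid to join the RGB branch.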