Scene completion refers to obtaining a dense scene representation from an incomplete perception of complex 3D scenes. This helps robots detect multi-scale obstacles and analyse object occlusions in scenarios such as autonomous driving. Recent advances show that implicit representation learning can be leveraged for continuous scene completion, enforced through physical constraints like the Eikonal equation. However, prior Eikonal completion methods only demonstrate results on watertight meshes, at the scale of tens of meshes. None of them has been successfully applied to non-watertight LiDAR point clouds of large open scenes, at the scale of thousands of scenes. In this paper, we propose a novel Eikonal formulation that conditions the implicit representation on localized shape priors, which function as dense boundary value constraints, and demonstrate that it works on SemanticKITTI and SemanticPOSS. It can also be extended to semantic Eikonal scene completion with only small modifications to the network architecture. With extensive quantitative and qualitative results, we demonstrate the benefits and drawbacks of existing Eikonal methods, which naturally leads to the new locally conditioned formulation. Notably, we improve IoU from 31.7% to 51.2% on SemanticKITTI and from 40.5% to 48.7% on SemanticPOSS. We extensively ablate our method and demonstrate that the proposed formulation is robust to a wide spectrum of implementation hyper-parameters. Code and models are publicly available at https://github.com/AIR-DISCOVER/LODE.
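For context, the following is a minimal sketch of the kind of Eikonal-regularized implicit completion objective referred to above, written for a signed distance field $f_\theta$. The loss weighting $\lambda$ and the exact form of the local conditioning $z(x)$ are illustrative assumptions, not the paper's precise formulation.

% Sketch of an Eikonal-regularized implicit scene completion objective.
% f_theta : implicit signed distance field predicted by the network
% Omega   : the scene volume over which the Eikonal term is sampled
% P       : observed (sparse, non-watertight) LiDAR surface points
% z(x)    : a localized shape prior conditioning f_theta at location x
% lambda and the conditioning mechanism are assumptions for illustration.
\begin{equation}
\mathcal{L}(\theta) =
\underbrace{\mathbb{E}_{x \sim \Omega}
  \bigl(\lVert \nabla_x f_\theta(x; z(x)) \rVert_2 - 1\bigr)^2}_{\text{Eikonal term}}
\;+\;
\lambda \,
\underbrace{\mathbb{E}_{p \sim P}\,
  \lvert f_\theta(p; z(p)) \rvert}_{\text{boundary (surface) term}} .
\end{equation}

The Eikonal term encourages $f_\theta$ to behave like a signed distance function, while the boundary term pins the zero level set to the observed points; conditioning on $z(x)$ is what turns the sparse observations into dense boundary value constraints.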