We present Lepard, a learning-based approach for partial point cloud matching in rigid and deformable scenes. The key characteristics of Lepard are the following approaches that exploit 3D positional knowledge for point cloud matching: 1) an architecture that disentangles the point cloud representation into feature space and 3D position space; 2) a position encoding method that explicitly reveals 3D relative distance information through the dot product of vectors; 3) a repositioning technique that modifies the cross-point-cloud relative positions. Ablation studies demonstrate the effectiveness of these techniques. For rigid point cloud matching, Lepard sets a new state of the art on the 3DMatch / 3DLoMatch benchmarks with 93.6% / 69.0% registration recall. In deformable cases, Lepard achieves +27.1% / +34.8% higher non-rigid feature matching recall than the prior art on our newly constructed 4DMatch / 4DLoMatch benchmark.
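To make the dot-product property in point 2) concrete, below is a minimal NumPy sketch of a rotary-style 3D position encoding: each coordinate axis rotates a block of feature channels by coordinate-dependent angles, so the dot product of two encoded vectors depends only on their relative 3D offset. The function name rotary_3d_encoding, the per-axis channel split, and the frequency base are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def rotary_3d_encoding(feat, xyz, base=10000.0):
        """Rotate feature channels by angles derived from 3D coordinates.

        feat: (N, d) point features, with d divisible by 6.
        xyz:  (N, 3) point coordinates.
        Returns the position-encoded features of shape (N, d).
        """
        n, d = feat.shape
        assert d % 6 == 0
        d_axis = d // 3                                       # channels assigned to each axis
        freqs = base ** (-np.arange(0, d_axis, 2) / d_axis)   # per-pair rotation frequencies
        out = np.empty_like(feat)
        for a in range(3):                                    # x, y, z axes
            block = feat[:, a * d_axis:(a + 1) * d_axis]
            theta = xyz[:, a:a + 1] * freqs                   # (N, d_axis/2) rotation angles
            cos, sin = np.cos(theta), np.sin(theta)
            even, odd = block[:, 0::2], block[:, 1::2]
            rot = np.empty_like(block)
            rot[:, 0::2] = even * cos - odd * sin             # 2D rotation of each channel pair
            rot[:, 1::2] = even * sin + odd * cos
            out[:, a * d_axis:(a + 1) * d_axis] = rot
        return out

Under this sketch, for query and key features q and k attached to points p1 and p2, the inner product of rotary_3d_encoding(q, p1) and rotary_3d_encoding(k, p2) is a function of the relative offset p1 - p2 only, which is the sense in which the encoding "reveals 3D relative distance information through the dot product of vectors."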