Point cloud registration is an important task in robotics and autonomous driving for estimating the ego-motion of the vehicle. Recent advances following the coarse-to-fine paradigm show promising potential in point cloud registration. However, existing methods rely on good superpoint correspondences, which are hard to obtain reliably and efficiently, resulting in less robust and accurate point cloud registration. In this paper, we propose a novel network, named RDMNet, to find dense point correspondences in a coarse-to-fine manner and improve the final pose estimation based on such reliable correspondences. Our RDMNet first uses a devised 3D-RoFormer mechanism to extract distinctive superpoints and generate reliable superpoint matches between two point clouds. The proposed 3D-RoFormer fuses 3D position information into the transformer network, efficiently exploiting the contextual and geometric information of point clouds to generate robust superpoint correspondences. RDMNet then propagates the sparse superpoint matches to dense point matches using neighborhood information for accurate point cloud registration. We extensively evaluate our method on multiple datasets from different environments. The experimental results demonstrate that our method outperforms existing state-of-the-art approaches on all tested datasets with strong generalization ability.
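The abstract describes 3D-RoFormer as fusing 3D position information into the transformer, i.e., extending rotary-style position embeddings from token indices to point coordinates. Below is a minimal sketch, not the authors' implementation, of one plausible way to do this: the feature channels are split into three groups and each group is rotated by angles derived from one coordinate axis. The channel split, frequency base, and feature sizes are illustrative assumptions.

```python
import torch

def rotary_3d(features: torch.Tensor, coords: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate feature channels by angles derived from 3D point positions.

    features: (N, C) superpoint features, C divisible by 6
              (2 channels per rotation pair for each of the 3 axes).
    coords:   (N, 3) superpoint coordinates (x, y, z).
    """
    N, C = features.shape
    assert C % 6 == 0, "need 2 channels per rotation pair for each of x, y, z"
    c_axis = C // 3                      # channels devoted to each coordinate axis
    half = c_axis // 2
    # per-pair frequencies, following the original RoPE formulation
    freqs = base ** (-torch.arange(half, dtype=features.dtype) / half)   # (half,)

    out = []
    for axis in range(3):
        f = features[:, axis * c_axis:(axis + 1) * c_axis]
        f1, f2 = f[:, :half], f[:, half:]                  # rotation pairs
        angles = coords[:, axis:axis + 1] * freqs          # (N, half)
        cos, sin = angles.cos(), angles.sin()
        out.append(torch.cat([f1 * cos - f2 * sin,
                              f1 * sin + f2 * cos], dim=-1))
    return torch.cat(out, dim=-1)

# hypothetical usage: encode 128 superpoints with 96-dim features before attention
feats = torch.randn(128, 96)
xyz = torch.rand(128, 3) * 50.0          # metric coordinates, e.g. LiDAR scale
feats_pe = rotary_3d(feats, xyz)
```

As with 1D rotary embeddings, applying such a rotation to queries and keys makes the attention dot product depend on relative 3D offsets between points, which is one way position information can be injected without adding extra tokens or bias terms.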