Although point cloud registration has achieved remarkable advances in object-level and indoor scenes, large-scale registration methods are rarely explored. Challenges mainly arise from the huge number of points, complex distribution, and outliers of outdoor LiDAR scans. In addition, most existing registration works adopt a two-stage paradigm: they first find correspondences by extracting discriminative local features, and then leverage estimators (e.g., RANSAC) to filter outliers, making them highly dependent on well-designed descriptors and post-processing choices. To address these problems, we propose an end-to-end transformer network (RegFormer) for large-scale point cloud alignment without any further post-processing. Specifically, a projection-aware hierarchical transformer is proposed to capture long-range dependencies and filter outliers by extracting point features globally. Our transformer has linear complexity, which guarantees high efficiency even for large-scale scenes. Furthermore, to effectively reduce mismatches, a bijective association transformer is designed to regress the initial transformation. Extensive experiments on the KITTI and NuScenes datasets demonstrate that our RegFormer achieves state-of-the-art performance in terms of both accuracy and efficiency.
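
To make the abstract's pipeline concrete, below is a minimal, illustrative PyTorch sketch of an end-to-end, correspondence-free registration network of this kind, not the authors' implementation. It assumes (1) the linear-complexity attention is approximated with a kernel feature map (phi(x) = elu(x) + 1) as in generic linear transformers, and (2) the pose is regressed directly as a translation vector plus a unit quaternion. Names such as `LinearAttention`, `PoseRegressor`, and `RegistrationSketch` are hypothetical placeholders, not identifiers from RegFormer.

```python
# Hedged sketch of an end-to-end transformer registration pipeline.
# NOT the RegFormer code: the hierarchical/projection-aware details are
# omitted and replaced with a single cross-attention stage for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearAttention(nn.Module):
    """Cross-attention with linear complexity in the number of points."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x, y):
        # x: (B, N, C) query features, y: (B, M, C) key/value features
        q = F.elu(self.q(x)) + 1.0                        # kernelized queries
        k = F.elu(self.k(y)) + 1.0                        # kernelized keys
        v = self.v(y)
        kv = torch.einsum('bmc,bmd->bcd', k, v)           # (B, C, C), O(M)
        z = 1.0 / (q @ k.sum(dim=1, keepdim=True).transpose(1, 2) + 1e-6)
        return torch.einsum('bnc,bcd->bnd', q, kv) * z    # (B, N, C), O(N)


class PoseRegressor(nn.Module):
    """Pools associated features and regresses translation + quaternion."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 7))       # 3 trans + 4 quat

    def forward(self, feats):
        out = self.mlp(feats.mean(dim=1))                 # global pooling
        t, q = out[:, :3], F.normalize(out[:, 3:], dim=-1)
        return t, q                                       # translation, rotation


class RegistrationSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)     # per-point embedding of xyz
        self.cross = LinearAttention(dim)  # source attends to target
        self.head = PoseRegressor(dim)

    def forward(self, src, tgt):
        fs, ft = self.embed(src), self.embed(tgt)
        associated = self.cross(fs, ft)    # correspondence-free association
        return self.head(associated)       # directly regress the transform


if __name__ == "__main__":
    src = torch.randn(2, 4096, 3)          # two pairs of LiDAR-like scans
    tgt = torch.randn(2, 4096, 3)
    t, q = RegistrationSketch()(src, tgt)
    print(t.shape, q.shape)                # torch.Size([2, 3]) torch.Size([2, 4])
```

The key design point mirrored here is that no explicit correspondences or RANSAC-style post-processing are needed: attention associates the two scans in feature space, and the rigid transform is regressed directly, with attention cost linear in the point count.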