This study investigates the impact of feature-vector invariance under translation and rotation of the input point sets on partial-to-partial point set registration, particularly for techniques based on deep learning and Gaussian mixture models (GMMs). We reveal both theoretical and practical problems with such deep-learning-based registration methods using GMMs, focusing on the limitations of DeepGMR, a pioneering study in this line, when applied to partial-to-partial registration. Our primary goal is to uncover the causes of these limitations and to propose a comprehensible solution. To this end, we introduce an attention-based reference point shifting (ARPS) layer, which robustly identifies a common reference point of two partial point sets and thereby acquires transformation-invariant features. The ARPS layer employs a well-studied attention module to find a common reference point rather than the overlap region. Owing to this, it significantly enhances the performance of DeepGMR and its recent variant, UGMMReg. Furthermore, these extended models outperform even prior deep learning methods that use attention blocks and Transformers to extract the overlap region or common reference points. We believe these findings provide deeper insights into registration methods using deep learning and GMMs.