An effective 3D descriptor should be invariant to geometric transformations such as scale and rotation, repeatable in the presence of occlusions and clutter, and generalisable across contexts where data are captured with different sensors. We present a simple yet effective method to learn generalisable and distinctive 3D local descriptors that can be used to register point clouds captured in different contexts with different sensors. Point cloud patches are extracted, canonicalised with respect to their local reference frame, and encoded into scale- and rotation-invariant compact descriptors by a point permutation-invariant deep neural network. Our descriptors can effectively generalise across sensor modalities from locally and randomly sampled points. We evaluate and compare our descriptors against alternative handcrafted and deep learning-based descriptors on several indoor and outdoor datasets reconstructed with both RGBD sensors and laser scanners. Our descriptors outperform recent descriptors by a large margin in terms of generalisation, and also achieve state-of-the-art results on benchmarks where training and testing are performed in the same scenarios.
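The pipeline described in the abstract (extract a local patch, canonicalise it with respect to a local reference frame, then encode it with a permutation-invariant network) can be sketched as follows. This is a minimal illustration, not the authors' trained model: the covariance-based local reference frame, the cube-sum sign disambiguation, the radius normalisation, and the random-weight PointNet-style encoder are all assumptions made here for demonstration.

```python
import numpy as np


def local_reference_frame(patch):
    """Canonical 3x3 frame from the patch covariance (illustrative choice).

    Eigenvectors of the covariance give repeatable axes; signs are
    disambiguated (here via the sign of the cubed projections) so that
    the frame rotates consistently with the patch.
    """
    cov = np.cov(patch.T)
    _, vecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    vecs = vecs[:, ::-1]               # largest-variance axis first
    for i in range(2):                 # fix signs of first two axes
        if np.sum((patch @ vecs[:, i]) ** 3) < 0:
            vecs[:, i] = -vecs[:, i]
    vecs[:, 2] = np.cross(vecs[:, 0], vecs[:, 1])  # right-handed frame
    return vecs


def permutation_invariant_encoder(points, dim=32, seed=0):
    """Toy PointNet-style encoder: shared per-point MLP + max pool.

    Weights are fixed random values (a stand-in for a trained network);
    the max pool over points makes the output order-invariant.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((3, 64))
    b1 = rng.standard_normal(64)
    W2 = rng.standard_normal((64, dim))
    h = np.tanh(points @ W1 + b1)      # per-point features
    return np.tanh(h @ W2).max(axis=0)  # symmetric pooling over points


def describe(cloud, keypoint, radius):
    """Descriptor for the patch around `keypoint` (hypothetical helper)."""
    dist = np.linalg.norm(cloud - keypoint, axis=1)
    patch = cloud[dist < radius] - keypoint        # keypoint-centred patch
    frame = local_reference_frame(patch)
    canonical = (patch @ frame) / radius           # rotation- and (crudely)
    return permutation_invariant_encoder(canonical)  # scale-normalised
```

With this construction the descriptor is exactly invariant to point reordering (the max pool is a symmetric function) and invariant to rigid rotations up to floating-point error, since both the patch and its estimated frame rotate together while the canonicalised coordinates stay fixed.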