How to extract salient point cloud features and estimate the pose between them remains a challenging question, owing to the inherent lack of structure and the permutation ambiguity of unordered points. Despite significant progress in applying deep learning to most 3D computer vision tasks, such as object classification, object segmentation, and point cloud registration, feature consistency remains underexploited in existing learning-based pipelines. In this paper, we present a novel learning-based alignment network for complex registration scenes, titled deep feature consistency, which consists of three main modules: a multiscale graph feature merging network that converts the geometric correspondence set into high-dimensional features, a correspondence weighting module that constructs multiple candidate inlier subsets, and a deep feature matching module, a Procrustes approach that yields a closed-form solution for the relative pose. As the key step of the deep feature matching module, a feature consistency matrix is constructed for each inlier subset, and its principal vector is taken as the inlier likelihoods of the corresponding subset. We comprehensively validate the robustness and effectiveness of our approach on both the 3DMatch dataset and the KITTI odometry dataset. For large indoor scenes, registration results on the 3DMatch dataset demonstrate that our method outperforms both state-of-the-art traditional and learning-based methods. For KITTI outdoor scenes, our approach likewise substantially lowers the transformation errors. We also demonstrate its strong cross-dataset generalization capability.
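The classical backbone of the pipeline the abstract describes can be sketched without the learned components: a consistency matrix over correspondences whose principal (leading) eigenvector scores inlier likelihood, followed by a weighted Procrustes (Kabsch/SVD) closed-form pose solve. The sketch below is a minimal, non-learned stand-in, assuming exact point-to-point correspondences and a Gaussian length-consistency kernel; the function names and the `sigma` parameter are illustrative, not the paper's.

```python
import numpy as np

def consistency_matrix(src, dst, sigma=0.1):
    """Pairwise length-consistency of correspondences (src[i] <-> dst[i]).
    M[i, j] is high when the i-j distance is preserved across the two
    clouds, in the spirit of classical spectral matching."""
    d_src = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
    d_dst = np.linalg.norm(dst[:, None, :] - dst[None, :, :], axis=-1)
    M = np.exp(-((d_src - d_dst) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(M, 0.0)  # a correspondence does not support itself
    return M

def principal_vector(M, iters=50):
    """Leading eigenvector of M via power iteration; its entries serve
    as per-correspondence inlier likelihoods."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v) + 1e-12
    return np.abs(v)

def weighted_procrustes(src, dst, w):
    """Closed-form SVD (Kabsch) solution for R, t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2."""
    w = w / w.sum()
    mu_s, mu_d = w @ src, w @ dst          # weighted centroids
    H = (src - mu_s).T @ (np.diag(w) @ (dst - mu_d))  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

With noise-free correspondences the consistency matrix is uniform, the weights degenerate to equal weighting, and the solver recovers the ground-truth rigid transform exactly; the weighting only matters once outlier correspondences are present.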

