Point cloud registration is a fundamental and challenging problem for autonomous robots operating in unstructured environments, with applications such as object pose estimation, simultaneous localization and mapping, and robot-sensor calibration. In global correspondence-based point cloud registration, data association is highly brittle and commonly produces a large fraction of outliers. Failure to reject these outliers lets errors propagate to downstream perception tasks. Maximum Consensus (MC) is a widely used technique for robust estimation, but it is known to be NP-hard: exact methods struggle to scale to realistic problem instances, while high outlier rates are challenging for approximate methods. To this end, we propose Graph-based Maximum Consensus Registration (GMCR), which is highly robust to outliers and scales to realistic problem instances. We propose novel consensus functions that map the decoupled MC objective to the graph domain, where we find a tight approximation of the maximum consensus set as the maximum clique; the final pose estimate is then given in closed form. We extensively evaluate GMCR on a synthetic registration benchmark, a robotic object localization task, and a scan matching benchmark. Our method achieves high accuracy and time efficiency compared to other state-of-the-art MC methods and compares favorably to other robust registration methods.
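
The sketch below is a minimal, generic illustration of the maximum-clique consensus idea described above, not the authors' GMCR implementation: it builds a pairwise-compatibility graph over putative correspondences using a simple distance-preservation test (the `noise_bound` threshold, the `consensus_registration` helper, and the use of `networkx` are illustrative assumptions, not the paper's consensus functions), takes the maximum clique as the consensus set, and recovers the pose in closed form with the standard Kabsch/SVD solution.

```python
# Minimal sketch of maximum-clique consensus registration (illustrative,
# not the GMCR method). Assumes numpy and networkx are available.
import numpy as np
import networkx as nx


def consensus_registration(src, dst, noise_bound=0.05):
    """src, dst: (N, 3) arrays of putatively corresponding 3D points."""
    n = src.shape[0]

    # Pairwise compatibility: two correspondences (i, j) can be explained by
    # a common rigid transform only if they preserve distances up to noise.
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            d_src = np.linalg.norm(src[i] - src[j])
            d_dst = np.linalg.norm(dst[i] - dst[j])
            if abs(d_src - d_dst) <= 2 * noise_bound:  # illustrative threshold
                G.add_edge(i, j)

    # Maximum clique = largest mutually consistent set of correspondences.
    inliers, _ = nx.max_weight_clique(G, weight=None)
    inliers = np.array(sorted(inliers))

    # Closed-form pose on the consensus set (Kabsch / SVD).
    p, q = src[inliers], dst[inliers]
    p_c, q_c = p - p.mean(0), q - q.mean(0)
    U, _, Vt = np.linalg.svd(p_c.T @ q_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T           # rotation mapping src onto dst
    t = q.mean(0) - R @ p.mean(0)  # translation
    return R, t, inliers
```

This toy pipeline scales quadratically in the number of correspondences when building the graph and relies on a general-purpose clique solver; the paper's contribution lies in the specific consensus functions that make the graph construction and clique-based approximation tight and efficient for the decoupled MC objective.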


