One minute a day: a quick read through top robotics conference papers.
标题:Taking the Scenic Route to 3D: Optimising Reconstruction from Moving Cameras
Authors: Oscar Mendez, Simon Hadfield, Nicolas Pugeault, Richard Bowden
Source: International Conference on Computer Vision (ICCV 2017)
Translated by: 穆新鹏
Reviewed by: 颜青松, 陈世浪
Personal sharing to WeChat Moments is welcome; other organisations or self-media wishing to repost should request authorisation via a back-end message.
Summary
Reconstruction of 3D environments is a problem widely addressed in the literature. While many reconstruction methods exist, few actively decide where the next observation should come from. Moreover, the problem of travelling from the camera's current position to the next, known as path planning, usually focuses on minimising path length. This is ill-suited to reconstruction applications, where learning about the environment matters more than speed of traversal.
We present a Scenic Route Planner that selects paths maximising information gain, in terms of both total map coverage and reconstruction accuracy. We also introduce a new type of collaborative behaviour into the planning stage, called opportunistic collaboration, which allows sensors to switch between acting as independent Structure from Motion (SfM) agents or as a variable-baseline stereo pair.
We show that Scenic Planning achieves performance similar to state-of-the-art batch approaches while using fewer than 0.00027% of the possible stereo pairs (3% of the views). Comparison against length-based path-planning methods shows that our approach produces more complete and more accurate maps with fewer frames. Finally, we demonstrate the Scenic Pathplanner's ability to generalise to live scenarios by mounting cameras on an autonomous ground-based sensor platform and exploring the environment.
Figure 1: Reconstruction results of different algorithms; the result of this paper, (d), is clearly the best.
Abstract
Reconstruction of 3D environments is a problem that has been widely addressed in the literature. While many approaches exist to perform reconstruction, few of them take an active role in deciding where the next observations should come from. Furthermore, the problem of travelling from the camera’s current position to the next, known as pathplanning, usually focuses on minimising path length. This approach is ill-suited for reconstruction applications, where learning about the environment is more valuable than speed of traversal.
We present a novel Scenic Route Planner that selects paths which maximise information gain, both in terms of total map coverage and reconstruction accuracy. We also introduce a new type of collaborative behaviour into the planning stage called opportunistic collaboration, which allows sensors to switch between acting as independent Structure from Motion (SfM) agents or as a variable baseline stereo pair.
We show that Scenic Planning enables similar performance to state-of-the-art batch approaches using less than 0.00027% of the possible stereo pairs (3% of the views). Comparison against length-based pathplanning approaches shows that our approach produces more complete and more accurate maps with fewer frames. Finally, we demonstrate the Scenic Pathplanner’s ability to generalise to live scenarios by mounting cameras on autonomous ground-based sensor platforms and exploring an environment.
If this paper interests you and you would like to download the full text, follow the 【泡泡机器人SLAM】 WeChat account (paopaorobot_slam).
Welcome to the PaoPao forum, where experts are ready to answer any of your questions about SLAM.
Whether you have a question to ask or want to browse threads and answer questions, the forum welcomes you!
Website: www.paopaorobot.org
Forum: http://paopaorobot.org/forums/
The original content of 泡泡机器人SLAM is produced with great effort by its members. Please respect our work: any repost must credit the 【泡泡机器人SLAM】 WeChat account, otherwise infringement will be pursued. You are also welcome to share our posts to your WeChat Moments, so that more people can enter the field of SLAM and join us in advancing SLAM in China!
For business cooperation and reposting, please contact liufuqiang_robot@hotmail.com