One minute a day to keep up with papers from top robotics conferences.
Title: Information Sparsification in Visual-Inertial Odometry
Authors: Jerry Hsiung, Ming Hsiao, Eric Westman, Rafael Valencia, and Michael Kaess
Source: International Conference on Intelligent Robots and Systems (IROS 2018)
Compiled by: 倪志鹏
Reviewed by: 颜青松, 陈世浪
Individuals are welcome to share this post to their WeChat Moments; other organizations or media outlets that wish to repost it should request authorization by leaving a message to the account.
Summary
In this paper, we present a novel approach that uses information sparsification to tightly couple visual and inertial measurements in a fixed-lag visual-inertial odometry (VIO) framework.
To bound the computational complexity, fixed-lag smoothers typically marginalize out variables, but in doing so they introduce a densely connected linear prior that significantly degrades both accuracy and efficiency.
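To see why the marginalized prior becomes dense, consider the standard Gaussian marginalization step (a generic sketch in my own notation, not taken from the paper): if \Lambda is the joint information matrix over the variables to keep, \alpha, and the variables to marginalize, \beta, then the prior on \alpha is the Schur complement
\[
\Lambda =
\begin{bmatrix}
\Lambda_{\alpha\alpha} & \Lambda_{\alpha\beta} \\
\Lambda_{\beta\alpha} & \Lambda_{\beta\beta}
\end{bmatrix},
\qquad
\Lambda_{\mathrm{prior}} = \Lambda_{\alpha\alpha} - \Lambda_{\alpha\beta}\,\Lambda_{\beta\beta}^{-1}\,\Lambda_{\beta\alpha},
\]
and the correction term \Lambda_{\alpha\beta}\Lambda_{\beta\beta}^{-1}\Lambda_{\beta\alpha} generally fills in entries between every pair of remaining variables that were connected to \beta, which is exactly the densely connected linear prior described above.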
Current state-of-the-art approaches address this issue by selectively discarding measurements and marginalizing additional variables. From an information-theoretic perspective, however, such strategies are sub-optimal. Instead, our approach performs a dense marginalization step and preserves the information content of the dense prior: the dense prior is sparsified with a nonlinear factor graph by minimizing the information loss.
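One common way to formalize "minimizing the information loss" when replacing a dense Gaussian prior with a sparser one (again a generic sketch with my own symbols \Lambda_q, \Sigma_p, \mathcal{S}, n; the paper's exact objective and choice of sparse topology may differ) is to pick, within a fixed sparse structure \mathcal{S} and keeping the mean fixed, the information matrix that minimizes the KL divergence from the dense prior p to the sparse approximation q:
\[
\Lambda_{q}^{\ast} = \arg\min_{\Lambda_{q} \in \mathcal{S}} D_{\mathrm{KL}}(p \,\|\, q)
= \arg\min_{\Lambda_{q} \in \mathcal{S}} \tfrac{1}{2}\Bigl(\operatorname{tr}(\Lambda_{q}\Sigma_{p}) - \log\det(\Lambda_{q}\Sigma_{p}) - n\Bigr),
\]
where \Sigma_{p} = \Lambda_{p}^{-1} is the covariance of the dense prior and n is its dimension. In this work the sparse approximation is additionally expressed as a nonlinear factor graph rather than a linearized one, as stated in the abstract.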
The resulting factor graph maintains information sparsity, structural similarity, and nonlinearity. To validate our approach, we conduct real-time drone tests and compare against current state-of-the-art fixed-lag VIO methods on the EuRoC visual-inertial dataset.
The experimental results show that the proposed method achieves competitive or superior accuracy in almost all trials. We include a detailed run-time analysis to demonstrate that the proposed algorithm is suitable for real-time applications.
Figure 1: Trajectory estimated by the proposed method overlaid on the ground-truth trajectory of the EuRoC Vicon Room 2 dataset, showing that the algorithm achieves highly accurate state estimation.
Abstract
In this paper, we present a novel approach to tightly couple visual and inertial measurements in a fixed-lag visual-inertial odometry (VIO) framework using information sparsification. To bound computational complexity, fixed-lag smoothers typically marginalize out variables, but consequently introduce a densely connected linear prior which significantly deteriorates accuracy and efficiency.
Current state-of-the-art approaches account for the issue by selectively discarding measurements and marginalizing additional variables.
However, such strategies are sub-optimal from an information-theoretic perspective. Instead, our approach performs a dense marginalization step and preserves the information content of the dense prior. Our method sparsifies the dense prior with a nonlinear factor graph by minimizing the information loss. The resulting factor graph maintains information sparsity, structural similarity, and nonlinearity. To validate our approach, we conduct real-time drone tests and perform comparisons to current state-of-the-art fixed-lag VIO methods in the EuRoC visual-inertial dataset. The experimental results show that the proposed method achieves competitive and superior accuracy in almost all trials. We include a detailed run-time analysis to demonstrate that the proposed algorithm is suitable for real-time applications.
If you are interested in this paper and would like to download the full text, follow the 泡泡机器人SLAM WeChat public account (paopaorobot_slam).
Welcome to the PaoPao forum, where experienced researchers can answer any of your questions about SLAM.
Whether you have a question to ask or want to browse threads and answer questions, the PaoPao forum welcomes you!
PaoPao website: www.paopaorobot.org
PaoPao forum: http://paopaorobot.org/forums/
The original content of 泡泡机器人SLAM is produced with great effort by the members of PaoPao Robot. Please respect our work: reposts must credit the 泡泡机器人SLAM WeChat public account, otherwise we will pursue infringement. We also welcome you to share our posts to your WeChat Moments so that more people can enter the field of SLAM, as we work together to advance SLAM research in China!
For business cooperation and reposting, please contact liufuqiang_robot@hotmail.com