Visual-inertial odometry has attracted extensive attention in the fields of autonomous driving and robotics. The size of the Field of View (FoV) plays an important role in Visual Odometry (VO) and Visual-Inertial Odometry (VIO), as a large FoV enables the perception of a wide range of surrounding scene elements and features. However, when the FoV of the camera reaches the negative half plane, image feature points can no longer be simply represented by [u,v,1]^T. To tackle this issue, we propose LF-VIO, a real-time VIO framework for cameras with an extremely large FoV. We leverage a three-dimensional vector with unit length to represent feature points, and design a series of algorithms to overcome this challenge. To address the scarcity of panoramic visual odometry datasets with ground-truth location and pose, we present the PALVIO dataset, collected with a Panoramic Annular Lens (PAL) system with an entire FoV of 360{\deg}x(40{\deg}-120{\deg}) and an IMU sensor. With a comprehensive variety of experiments, the proposed LF-VIO is verified on both the established PALVIO benchmark and a public fisheye camera dataset with a FoV of 360{\deg}x(0{\deg}-93.5{\deg}). LF-VIO outperforms state-of-the-art visual-inertial odometry methods. Our dataset and code are made publicly available at https://github.com/flysoaryun/LF-VIO
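
As a rough illustration of the representation issue mentioned above, the following sketch (not the authors' implementation; the function names and sample coordinates are hypothetical) contrasts the homogeneous form [u,v,1]^T with a unit-length bearing vector for a feature lying on the negative half plane of a camera whose FoV exceeds 180{\deg}.

```python
# Minimal sketch, assuming a generic camera-frame 3D point; illustrative only.
import numpy as np

def homogeneous_representation(point_cam):
    """Classic representation [X/Z, Y/Z, 1]^T.

    Breaks down when Z <= 0, i.e. when the feature lies on the
    negative half plane of a camera with a FoV larger than 180 degrees.
    """
    X, Y, Z = point_cam
    if Z <= 0:
        raise ValueError("point on the negative half plane: "
                         "[u, v, 1]^T cannot represent it")
    return np.array([X / Z, Y / Z, 1.0])

def bearing_representation(point_cam):
    """Unit-length 3D bearing vector: well defined for any viewing direction."""
    p = np.asarray(point_cam, dtype=float)
    return p / np.linalg.norm(p)

# One feature in front of the camera, one behind the image plane
front = np.array([0.5, -0.2, 2.0])
behind = np.array([0.5, -0.2, -1.0])   # visible to a >180-degree FoV camera

print(bearing_representation(front))    # well defined
print(bearing_representation(behind))   # still well defined
try:
    homogeneous_representation(behind)
except ValueError as err:
    print("homogeneous form fails:", err)
```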