Visual-inertial odometry has attracted extensive attention in the fields of autonomous driving and robotics. The size of the Field of View (FoV) plays an important role in Visual Odometry (VO) and Visual-Inertial Odometry (VIO), as a large FoV enables the perception of a wide range of surrounding scene elements and features. However, when the FoV of the camera reaches the negative half plane, one can no longer simply use [u,v,1]^T to represent image feature points. To tackle this issue, we propose LF-VIO, a real-time VIO framework for cameras with an extremely large FoV. We leverage a three-dimensional vector of unit length to represent feature points, and design a series of algorithms to overcome this challenge. To address the scarcity of panoramic visual odometry datasets with ground-truth location and pose, we present the PALVIO dataset, collected with a Panoramic Annular Lens (PAL) system with an entire FoV of 360x(40-120) degrees and an IMU sensor. Through comprehensive experiments, the proposed LF-VIO is verified on both the established PALVIO benchmark and a public fisheye camera dataset with a FoV of 360x(0-93.5) degrees. LF-VIO outperforms state-of-the-art visual-inertial odometry methods. Our dataset and code are made publicly available at https://github.com/flysoaryun/LF-VIO
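To illustrate why the unit-vector representation matters, the following is a minimal sketch (not the paper's actual implementation) assuming an idealized spherical camera model, where an observation is parameterized by a polar angle theta from the optical axis and an azimuth phi. The helper name `pixel_to_unit_bearing` is hypothetical. For a point with theta > 90 degrees, the z-component of the bearing is negative, so the point lies on the negative half plane and has no valid [u,v,1]^T representation, yet its unit bearing vector remains well defined:

```python
import numpy as np

def pixel_to_unit_bearing(theta, phi):
    """Map spherical angles (theta: polar angle from the optical axis,
    phi: azimuth) to a unit-length 3D bearing vector.
    Hypothetical helper for an idealized spherical camera model."""
    return np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

# A feature at theta = 120 degrees lies behind the image plane (z < 0),
# which [u,v,1]^T cannot express, but the unit bearing handles it naturally.
b = pixel_to_unit_bearing(np.radians(120.0), 0.0)
print(b[2] < 0)                          # z-component is negative
print(np.isclose(np.linalg.norm(b), 1))  # vector has unit length
```

Because every observation is a point on the unit sphere regardless of where it falls in the FoV, downstream geometry (epipolar constraints, triangulation) can be formulated uniformly for front- and negative-half-plane features.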