Wide field-of-view (FoV) LiDAR sensors provide dense geometry across large environments, but existing LiDAR-inertial-visual odometry (LIVO) systems generally rely on a single camera, limiting their ability to fully exploit LiDAR-derived depth for photometric alignment and scene colorization. We present Omni-LIVO, a tightly coupled multi-camera LIVO system that leverages multi-view observations to exploit LiDAR geometry across the sensor's full FoV. Omni-LIVO introduces a cross-view direct alignment strategy that maintains photometric consistency across non-overlapping views, and extends the error-state iterated Kalman filter (ESIKF) with multi-view updates and adaptive covariance. The system is evaluated on public benchmarks and our custom dataset, showing improved accuracy and robustness over state-of-the-art LIVO, LIO, and visual-inertial SLAM baselines. Code and dataset will be released upon publication.