Due to their resilience to motion blur and high robustness in low-light and high dynamic range conditions, event cameras are poised to become enabling sensors for vision-based exploration on future Mars helicopter missions. However, existing event-based visual-inertial odometry (VIO) algorithms either suffer from high tracking errors or are brittle, since they cannot cope with significant depth uncertainties caused by an unforeseen loss of tracking or other effects. In this work, we introduce EKLT-VIO, which addresses both limitations by combining a state-of-the-art event-based frontend with a filter-based backend. This makes it both accurate and robust to uncertainties, outperforming event- and frame-based VIO algorithms on challenging benchmarks by 32%. In addition, we demonstrate accurate performance in hover-like conditions (outperforming existing event-based methods) as well as high robustness in newly collected Mars-like and high-dynamic-range sequences, where existing frame-based methods fail. In doing so, we show that event-based VIO is the way forward for vision-based exploration on Mars.