Event-based cameras are a new type of vision sensor whose pixels work independently and respond asynchronously to brightness changes with microsecond resolution, instead of providing standard intensity frames. Compared with traditional cameras, event-based cameras offer low latency, no motion blur, and high dynamic range (HDR), which opens up possibilities for robots to handle challenging scenes. We propose a visual-inertial odometry method for stereo event cameras based on Kalman filtering. The visual module updates the camera pose by aligning the edges of a semi-dense 3D map to a 2D image, and the IMU module propagates the pose using the midpoint method. We evaluate our method on public datasets covering natural scenes with general 6-DoF motion and compare the results against ground truth. We show that the proposed pipeline provides improved accuracy over a state-of-the-art visual odometry method for stereo event cameras, while running in real time on a standard CPU. To the best of our knowledge, this is the first published visual-inertial odometry algorithm for stereo event cameras.
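The midpoint-method IMU propagation mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the gravity convention, and the rotation-matrix state representation are all assumptions made for the example.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)  # first-order approximation near zero
    a = phi / theta
    K = skew(a)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def imu_midpoint_step(R, p, v, gyro0, gyro1, acc0, acc1, dt,
                      g=np.array([0.0, 0.0, -9.81])):
    """One midpoint-rule propagation step between two consecutive IMU samples.

    R, p, v        : current world-frame orientation (3x3), position, velocity.
    gyro0, gyro1   : body-frame angular rates at t and t + dt.
    acc0, acc1     : body-frame specific forces at t and t + dt.
    g              : assumed gravity vector (z-up world frame).
    """
    # Orientation: integrate the average angular velocity over the interval.
    w_mid = 0.5 * (gyro0 + gyro1)
    R_new = R @ exp_so3(w_mid * dt)
    # Acceleration: average the world-frame, gravity-compensated accelerations
    # evaluated with the old and the new orientation (the midpoint rule).
    a_mid = 0.5 * (R @ acc0 + R_new @ acc1) + g
    v_new = v + a_mid * dt
    p_new = p + v * dt + 0.5 * a_mid * dt * dt
    return R_new, p_new, v_new
```

For a stationary sensor, the accelerometer reads the reaction to gravity, so the gravity-compensated midpoint acceleration cancels to zero and the propagated pose stays put, which is a quick sanity check for the integration.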