Optical flow computation with frame-based cameras provides high accuracy, but the speed is limited either by the model size of the algorithm or by the frame rate of the camera, making it inadequate for high-speed applications. Event cameras provide continuous asynchronous event streams that overcome the frame-rate limitation. However, the algorithms for processing event data either borrow a frame-like setup, limiting the speed, or suffer from lower accuracy. We fuse the complementary accuracy and speed advantages of the frame- and event-based pipelines to provide high-speed optical flow while maintaining a low error rate. Our bio-mimetic network is validated on the MVSEC dataset, showing a 19% error degradation at a 4x speed-up. We then demonstrate the system in a high-speed drone-flight scenario, where the high-speed event camera computes the flow even before the optical camera sees the drone, making the approach well suited for applications like tracking and segmentation. This work shows that the fundamental trade-offs in frame-based processing may be overcome by fusing data from other modalities.