Event-based cameras can overcome the limitations of frame-based cameras for important tasks such as high-speed motion detection during self-driving car navigation in low-illumination conditions. The event cameras' high temporal resolution and high dynamic range allow them to work in fast-motion and extreme-lighting scenarios. However, conventional computer vision methods, such as Deep Neural Networks, are not well adapted to event data, which are asynchronous and discrete. Moreover, traditional 2D-encoding representation methods for event data sacrifice temporal resolution. In this paper, we first improve the 2D-encoding representation by expanding it into three dimensions to better preserve the temporal distribution of the events. We then propose 3D-FlowNet, a novel network architecture that processes this 3D input representation and outputs optical flow estimations according to the new encoding method. A self-supervised training strategy is adopted to compensate for the lack of labeled datasets for event-based cameras. Finally, the proposed network is trained and evaluated on the Multi-Vehicle Stereo Event Camera (MVSEC) dataset. The results show that our 3D-FlowNet outperforms state-of-the-art approaches with fewer training epochs (30 compared to 100 for Spike-FlowNet).
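The 3D expansion of the 2D event encoding described above can be illustrated as a time-binned voxel grid: instead of collapsing all events into one frame, timestamps are discretized into several temporal bins so the temporal distribution survives. The sketch below is an illustrative reconstruction under assumed conventions (event layout `(x, y, t, polarity)`, signed polarity accumulation); it is not the paper's exact encoding.

```python
import numpy as np

def encode_events_3d(events, height, width, num_bins):
    """Accumulate events into a 3D (time, height, width) voxel grid.

    `events` is an (N, 4) array of (x, y, t, polarity) rows. The name,
    argument order, and signed accumulation are assumptions for
    illustration, not the exact 3D-FlowNet encoding.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 2]
    t0, t1 = t.min(), t.max()
    # Normalize timestamps into discrete temporal bins [0, num_bins - 1],
    # preserving the temporal structure that a single 2D frame discards.
    if t1 > t0:
        bins = ((t - t0) / (t1 - t0) * (num_bins - 1)).astype(int)
    else:
        bins = np.zeros(len(events), dtype=int)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    pol = np.where(events[:, 3] > 0, 1.0, -1.0)
    # Signed accumulation of event polarities per (bin, y, x) voxel;
    # np.add.at handles repeated indices correctly.
    np.add.at(grid, (bins, y, x), pol)
    return grid
```

The resulting `(num_bins, H, W)` tensor can be fed to a 3D-convolutional network, whereas the conventional 2D encoding would correspond to `num_bins = 1`.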