Despite the success of neural networks in computer vision tasks, digital 'neurons' are a very loose approximation of biological neurons. Today's learning approaches are designed to function on digital devices with digital data representations such as image frames. In contrast, biological vision systems are generally far more capable and efficient than state-of-the-art digital computer vision algorithms. Event cameras are an emerging sensor technology that imitates biological vision with asynchronously firing pixels, eschewing the concept of the image frame. To leverage modern learning techniques, many event-based algorithms are forced to accumulate events back into image frames, somewhat squandering the advantages of event cameras. We follow the opposite paradigm and develop a new type of neural network that operates closer to the original event data stream. We demonstrate state-of-the-art performance in angular velocity regression and competitive results on optical flow estimation, while avoiding the difficulties associated with training spiking neural networks (SNNs). Furthermore, the processing latency of our proposed approach is less than 1/10 that of any other implementation, and continuous inference improves this by another order of magnitude.
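To make the contrast in the abstract concrete, the following is a minimal illustrative sketch (not the paper's method) of what an event stream looks like and what is lost when it is accumulated into a frame. The event data here are synthetic toy values, and the helper function `accumulate_to_frame` is a hypothetical name introduced only for illustration.

```python
import numpy as np

# An event camera reports a sparse, asynchronous stream of events,
# each a tuple (x, y, t, polarity), rather than dense image frames.
# Many learning pipelines accumulate these events into a frame before
# feeding them to a CNN, which collapses the fine temporal structure.

# Hypothetical toy event stream: 10,000 events on a 240x180 sensor.
rng = np.random.default_rng(0)
num_events, width, height = 10_000, 240, 180
events = np.stack([
    rng.integers(0, width, num_events),           # x coordinate
    rng.integers(0, height, num_events),          # y coordinate
    np.sort(rng.uniform(0.0, 0.05, num_events)),  # timestamp in seconds
    rng.choice([-1, 1], num_events),              # polarity (brightness up/down)
], axis=1)

def accumulate_to_frame(events: np.ndarray, width: int, height: int) -> np.ndarray:
    """Sum event polarities per pixel to build a single dense frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    np.add.at(frame, (y, x), events[:, 3])
    return frame

frame = accumulate_to_frame(events, width, height)
print(frame.shape)  # (180, 240): per-event timing has been discarded
```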