Event cameras are an exciting new sensor modality that enables high-speed imaging with extremely low latency and wide dynamic range. Unfortunately, most machine learning architectures are not designed to directly handle sparse data like that generated by event cameras. Many state-of-the-art algorithms for event cameras rely on interpolated event representations, which obscure crucial timing information, increase the data volume, and limit overall network performance. This paper details an event representation called Time-Ordered Recent Event (TORE) volumes. TORE volumes are designed to compactly store raw spike timing information with minimal information loss. This bio-inspired design is memory efficient, computationally fast, avoids time-blocking (i.e., fixed and predefined frame rates), and contains "local memory" from past data. The design is evaluated on a wide range of challenging tasks (e.g., event denoising, image reconstruction, classification, and human pose estimation) and is shown to dramatically improve state-of-the-art performance. TORE volumes are an easy-to-implement replacement for any algorithm currently utilizing event representations.
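To make the idea concrete, below is a minimal sketch of one way to realize the kind of representation the abstract describes: each pixel and polarity keeps a small FIFO of its most recent event timestamps, and a dense volume can be rendered at any query time without imposing a fixed frame rate. The class name, the buffer depth K, the log compression, and the clamping constant are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

class RecentEventBuffer:
    """Illustrative per-pixel buffer of recent event timestamps (sketch only)."""

    def __init__(self, height, width, k=4):
        # One FIFO of the K most recent timestamps per pixel and polarity.
        # Empty slots start at -inf so they saturate to the maximum age below.
        self.k = k
        self.times = np.full((2, k, height, width), -np.inf, dtype=np.float64)

    def add_event(self, x, y, t, polarity):
        # Shift older timestamps down one slot and store the newest at slot 0.
        p = 1 if polarity > 0 else 0
        self.times[p, 1:, y, x] = self.times[p, :-1, y, x].copy()
        self.times[p, 0, y, x] = t

    def volume(self, t_query, max_age=1.0):
        # Log-compressed "age" of each stored event at the query time, clamped
        # so stale or missing events do not dominate. Because the volume is
        # rendered on demand, no predefined frame rate is required.
        age = np.clip(t_query - self.times, 0.0, max_age)
        vol = np.log1p(age)
        # Flatten polarity and FIFO depth into channels: (2*K, H, W) tensor.
        return vol.reshape(-1, *vol.shape[2:])
```

A network consuming this sketch would treat the returned (2*K, H, W) array as an ordinary multi-channel image, which is what makes such a representation a drop-in replacement for frame-based event representations.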