We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high dynamic range and with strong robustness against motion blur. These unique properties offer great potential for low-latency object detection and tracking in time-critical scenarios. Prior work in event-based vision has achieved outstanding detection performance, but at the cost of substantial inference time, typically beyond 40 milliseconds. By revisiting the high-level design of recurrent vision backbones, we reduce inference time by a factor of 5 while retaining similar performance. To achieve this, we explore a multi-stage design that utilizes three key concepts in each stage: First, a convolutional prior that can be regarded as a conditional positional embedding. Second, local and dilated global self-attention for spatial feature interaction. Third, recurrent temporal feature aggregation to minimize latency while retaining temporal information. RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection, achieving an mAP of 47.5% on the Gen1 automotive dataset. At the same time, RVTs offer fast inference (13 ms on a T4 GPU) and favorable parameter efficiency (5 times fewer parameters than prior art). Our study brings new insights into effective design choices that could be fruitful for research beyond event-based vision.
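To make the three stage-level concepts concrete, the following is a minimal PyTorch sketch of one such stage, not the authors' reference implementation: the class name `RVTStage`, the window size, the use of `nn.MultiheadAttention`, and the per-pixel `nn.LSTMCell` are illustrative assumptions. A strided convolution plays the role of the convolutional prior, MaxViT-style block and grid attention provide local and dilated global spatial interaction, and an LSTM cell shared across spatial locations performs the recurrent temporal aggregation.

```python
# Illustrative sketch of one RVT-style stage (assumed names and sizes, not the paper's code).
import torch
import torch.nn as nn


def block_partition(x, p):
    """Split (B, H, W, C) into non-overlapping p x p windows -> (B*nWin, p*p, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // p, p, W // p, p, C).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, p * p, C)


def block_reverse(win, p, B, H, W):
    C = win.shape[-1]
    x = win.view(B, H // p, W // p, p, p, C).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(B, H, W, C)


def grid_partition(x, g):
    """Group a g x g set of pixels strided across the feature map (dilated windows)."""
    B, H, W, C = x.shape
    x = x.view(B, g, H // g, g, W // g, C).permute(0, 2, 4, 1, 3, 5)
    return x.reshape(-1, g * g, C)


def grid_reverse(win, g, B, H, W):
    C = win.shape[-1]
    x = win.view(B, H // g, W // g, g, g, C).permute(0, 3, 1, 4, 2, 5)
    return x.reshape(B, H, W, C)


class RVTStage(nn.Module):
    """One stage: conv prior -> local (block) attention -> dilated (grid) attention -> LSTM."""

    def __init__(self, in_ch, dim, heads=4, window=8):
        super().__init__()
        self.window = window
        # Strided conv downsamples and can be seen as a conditional positional embedding.
        self.conv_prior = nn.Conv2d(in_ch, dim, kernel_size=3, stride=2, padding=1)
        self.norm1 = nn.LayerNorm(dim)
        self.block_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.grid_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # LSTM cell shared across all spatial locations aggregates features over time.
        self.lstm = nn.LSTMCell(dim, dim)

    def forward(self, x, state=None):
        x = self.conv_prior(x)                        # (B, dim, H, W)
        B, C, H, W = x.shape
        x = x.permute(0, 2, 3, 1)                     # (B, H, W, C)

        # Local self-attention within non-overlapping windows.
        w = block_partition(self.norm1(x), self.window)
        w = self.block_attn(w, w, w, need_weights=False)[0]
        x = x + block_reverse(w, self.window, B, H, W)

        # Dilated global self-attention across a strided grid of pixels.
        g = grid_partition(self.norm2(x), self.window)
        g = self.grid_attn(g, g, g, need_weights=False)[0]
        x = x + grid_reverse(g, self.window, B, H, W)

        # Recurrent temporal aggregation: one LSTM step per event frame.
        tokens = x.reshape(B * H * W, C)
        if state is None:
            state = (torch.zeros_like(tokens), torch.zeros_like(tokens))
        h, c = self.lstm(tokens, state)
        out = h.view(B, H, W, C).permute(0, 3, 1, 2)  # back to (B, dim, H, W)
        return out, (h, c)


if __name__ == "__main__":
    stage = RVTStage(in_ch=20, dim=64, window=8)      # e.g. a 20-channel event representation
    state = None
    for _ in range(4):                                 # iterate over a short event-frame sequence
        frame = torch.randn(1, 20, 128, 128)
        feat, state = stage(frame, state)
    print(feat.shape)                                  # torch.Size([1, 64, 64, 64])
```

Stacking several such stages and keeping the recurrent state across event frames is what lets the backbone retain temporal information without reprocessing long event histories, which is the source of the latency reduction claimed above.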