Visual object tracking under challenging motion and lighting conditions is hindered by the limitations of conventional cameras, which are prone to producing images with motion blur. Event cameras are novel sensors well suited to performing vision tasks robustly under these conditions. However, due to the nature of their output, applying them to object detection and tracking is non-trivial. In this work, we propose a framework that combines event cameras with off-the-shelf deep learning for object tracking. We show that reconstructing event data into intensity frames improves tracking performance under conditions in which conventional cameras fail to provide acceptable results.
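To give a rough intuition for the event-to-frame step mentioned above, the sketch below naively accumulates signed event polarities into a 2D intensity-like frame. This is a hypothetical illustration only, not the reconstruction method the abstract refers to; the function name and event format are assumptions:

```python
# Hypothetical sketch: naive accumulation of events into an intensity-like
# frame. This is NOT the paper's reconstruction method, which is more
# sophisticated than simple polarity summation.
def accumulate_events(events, width, height):
    """Sum signed event polarities into a 2D frame.

    events: iterable of (x, y, polarity) tuples, polarity in {-1, +1}.
    Returns a height x width list of lists of accumulated polarity.
    """
    frame = [[0] * width for _ in range(height)]
    for x, y, p in events:
        frame[y][x] += p  # each event nudges one pixel up or down
    return frame

# Example: two positive events at (0, 0), one negative event at (1, 1).
frame = accumulate_events([(0, 0, 1), (1, 1, -1), (0, 0, 1)], 2, 2)
# frame == [[2, 0], [0, -1]]
```

A frame produced this way (after normalization) could in principle be fed to any conventional frame-based tracker, which is the appeal of reconstruction-based pipelines.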