Object tracking with the retina-inspired, event-based dynamic vision sensor (DVS) is challenging because of noise events, rapid changes in event-stream shape, cluttered background textures, and occlusion. To address these challenges, this paper presents a robust event-stream pattern tracking method based on the correlation filter mechanism. In the proposed method, rate coding is used to encode the event-stream object within each time segment. Feature representations from hierarchical convolutional layers of a deep convolutional neural network (CNN) then describe the appearance of the rate-encoded event-stream object. The results show that our method not only achieves good tracking performance in complicated scenes with noise events, cluttered background textures, occlusion, and intersecting trajectories, but is also robust to scale variation, pose variation, and non-rigid deformation. In addition, this correlation-filter-based event-stream tracking has the advantage of high speed. The proposed approach should promote potential applications of event-based vision sensors in self-driving, robotics, and many other high-speed scenarios.
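As a rough illustration of the rate-coding step mentioned above, the sketch below accumulates the DVS events (x, y, timestamp, polarity) of one time segment into a per-pixel event-count frame that could then be fed to a pretrained CNN for feature extraction. The function name, array layout, and normalization are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np

def rate_code_segment(events, height, width):
    """Encode one segment of DVS events as a per-pixel event-rate frame.

    events: array of shape (N, 4) with columns (x, y, timestamp, polarity).
    Returns a (height, width) float32 frame where each pixel holds the
    number of events it fired during the segment, scaled to [0, 1].
    """
    frame = np.zeros((height, width), dtype=np.float32)
    if len(events) == 0:
        return frame
    xs = events[:, 0].astype(np.int64)
    ys = events[:, 1].astype(np.int64)
    # Count events per pixel (rate coding: firing rate within the segment).
    np.add.at(frame, (ys, xs), 1.0)
    # Normalize so the encoded frame is suitable as CNN input.
    frame /= max(frame.max(), 1.0)
    return frame
```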