Single shot detectors, being potentially faster and simpler than two-stage detectors, tend to be more applicable to object detection in videos. Nevertheless, extending such object detectors from images to videos is not trivial, especially when appearance deterioration exists in videos, \emph{e.g.}, motion blur or occlusion. A natural question is how to exploit temporal coherence across frames to boost detection. In this paper, we propose to address the problem by enhancing per-frame features through aggregation of neighboring frames. Specifically, we present the Single Shot Video Object Detector (SSVD) -- a new architecture that integrates feature aggregation into a one-stage detector for object detection in videos. Technically, SSVD takes Feature Pyramid Network (FPN) as the backbone network to produce multi-scale features. Unlike existing feature aggregation methods, SSVD, on one hand, estimates motion and aggregates nearby features along the motion paths, and on the other, hallucinates features by directly sampling features from adjacent frames, in a two-stream structure. Extensive experiments are conducted on the ImageNet VID dataset, and competitive results are reported when comparing to state-of-the-art approaches. More remarkably, for $448 \times 448$ input, SSVD achieves 79.2\% mAP on ImageNet VID, processing one frame in 85 ms on an Nvidia Titan X Pascal GPU. The code is available at \url{https://github.com/ddjiajun/SSVD}.
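To make the flow-guided stream concrete, the following is a minimal NumPy sketch of the aggregation idea described above: features from a neighboring frame are warped toward the current frame along an estimated motion vector and then averaged with the current frame's features. The function names (`warp`, `aggregate`), the per-frame integer motion vectors, and the scalar weights are illustrative assumptions, not the paper's actual implementation (which uses learned optical flow and bilinear warping on deep feature maps).

```python
import numpy as np

def warp(feat, flow):
    """Illustrative warping: shift a (C, H, W) feature map by an integer
    (dy, dx) motion vector, zero-padding the uncovered border."""
    dy, dx = flow
    h, w = feat.shape[-2:]
    out = np.zeros_like(feat)
    dst_y = slice(max(dy, 0), h + min(dy, 0))
    dst_x = slice(max(dx, 0), w + min(dx, 0))
    src_y = slice(max(-dy, 0), h + min(-dy, 0))
    src_x = slice(max(-dx, 0), w + min(-dx, 0))
    out[..., dst_y, dst_x] = feat[..., src_y, src_x]
    return out

def aggregate(cur, neighbors, flows, weights):
    """Flow-guided aggregation: warp each neighbor's feature map toward
    the current frame, then take a weighted average with the current
    frame's features. weights[0] weighs the current frame itself."""
    acc = weights[0] * cur
    for feat, flow, w in zip(neighbors, flows, weights[1:]):
        acc = acc + w * warp(feat, flow)
    return acc / sum(weights)

# Toy usage: a single activated feature location in the neighbor frame
# is carried along its motion path into the current frame's features.
cur = np.zeros((1, 4, 4))
prev = np.zeros((1, 4, 4))
prev[0, 0, 0] = 1.0
fused = aggregate(cur, [prev], [(1, 1)], [1.0, 1.0])
```

In the actual model the motion field is dense and sub-pixel (bilinear sampling), and the aggregation weights are predicted per position rather than fixed scalars; this sketch only conveys the warp-then-average structure of the stream.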