This paper presents the novel idea of generating object proposals by leveraging temporal information for video object detection. The feature aggregation in modern region-based video object detectors heavily relies on learned proposals generated from a single-frame RPN. This inevitably introduces additional components like NMS and produces unreliable proposals on low-quality frames. To tackle these restrictions, we present SparseVOD, a novel video object detection pipeline that employs Sparse R-CNN to exploit temporal information. In particular, we introduce two modules in the dynamic head of Sparse R-CNN. First, a Temporal Feature Extraction module based on the Temporal RoI Align operation is added to extract the RoI proposal features. Second, motivated by sequence-level semantic aggregation, we incorporate an attention-guided Semantic Proposal Feature Aggregation module to enhance object feature representation before detection. The proposed SparseVOD effectively alleviates the overhead of complicated post-processing methods and makes the overall pipeline end-to-end trainable. Extensive experiments show that our method significantly improves the single-frame Sparse R-CNN by 8%-9% in mAP. Furthermore, besides achieving state-of-the-art 80.3% mAP on the ImageNet VID dataset with a ResNet-50 backbone, our SparseVOD outperforms existing proposal-based methods by a significant margin at higher IoU thresholds (IoU > 0.5).
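To make the attention-guided aggregation idea concrete, the following is a minimal PyTorch sketch of similarity-weighted aggregation of per-proposal features across support frames. The function name, tensor shapes, and the cosine-similarity attention used here are illustrative assumptions for exposition, not the authors' SparseVOD implementation.

```python
import torch
import torch.nn.functional as F

def aggregate_proposal_features(key_feats: torch.Tensor,
                                support_feats: torch.Tensor) -> torch.Tensor:
    """Illustrative attention-guided aggregation of proposal features.

    key_feats:     (N, C)    proposal features from the key frame
    support_feats: (T, N, C) corresponding proposal features from T support frames
    Returns:       (N, C)    enhanced key-frame proposal features
    """
    # Cosine similarity between each key proposal and its temporal counterparts.
    key_norm = F.normalize(key_feats, dim=-1)             # (N, C)
    sup_norm = F.normalize(support_feats, dim=-1)         # (T, N, C)
    sim = (sup_norm * key_norm.unsqueeze(0)).sum(dim=-1)  # (T, N)

    # Softmax over the temporal axis yields per-frame attention weights.
    weights = F.softmax(sim, dim=0).unsqueeze(-1)         # (T, N, 1)

    # Semantic aggregation: similarity-weighted sum over support frames.
    return (weights * support_feats).sum(dim=0)           # (N, C)
```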