Transferring image-based object detectors to the video domain remains challenging under resource constraints. Previous efforts used optical flow to propagate unchanged features across frames; however, the overhead of flow computation is considerable even for the slowly changing scenes common in applications such as surveillance. In this paper, we propose temporal early exits to reduce the computational complexity of per-frame video object detection. Multiple temporal early exit modules with low computational overhead are inserted at early layers of the backbone network to identify semantic differences between consecutive frames. Full computation is required only when a frame is identified as semantically changed relative to previous frames; otherwise, the detection results from previous frames are reused. Experiments on CDnet show that our method reduces the computational complexity and execution time of per-frame video object detection by up to $34\times$ compared to existing methods, with an acceptable reduction of 2.2\% in mAP.
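The gating idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the change metric (mean absolute difference on early-layer features), the threshold value, and the helper names `extract_early_feat` and `full_detect` are all assumptions for exposition.

```python
import numpy as np

def semantic_change(feat_prev, feat_curr, threshold=0.05):
    """Cheap early-exit check: flag a semantic change when the mean
    absolute difference between early-layer feature maps of consecutive
    frames exceeds a threshold (hypothetical metric)."""
    return np.mean(np.abs(feat_curr - feat_prev)) > threshold

def detect_with_early_exit(frames, extract_early_feat, full_detect,
                           threshold=0.05):
    """Run the full detector only on frames flagged as changed;
    otherwise reuse the cached detections from the last fully
    processed frame."""
    cached_feat, cached_dets = None, None
    results = []
    for frame in frames:
        feat = extract_early_feat(frame)  # cheap early-layer features
        if cached_feat is None or semantic_change(cached_feat, feat, threshold):
            cached_dets = full_detect(frame)  # expensive full forward pass
            cached_feat = feat
        results.append(cached_dets)
    return results
```

On a static sequence, `full_detect` runs once and every later frame reuses its output, which is the source of the claimed savings when scenes change slowly.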