This paper introduces an online model for object detection in videos designed to run in real-time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interleaved recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing methods for detection in video, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
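To make the Bottleneck-LSTM idea concrete, the following is a minimal NumPy sketch of the gating structure, under simplifying assumptions not taken from the paper: pointwise (1x1) projections stand in for the paper's depthwise-separable convolutions, and all weight names are illustrative. The key cost saving shown is that the input and previous hidden state are first compressed into a low-channel bottleneck, and all four LSTM gates are then computed from that bottleneck alone.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BottleneckLSTMCell:
    """Illustrative Bottleneck-LSTM cell: pointwise (1x1) projections only,
    applied per spatial location; a sketch, not the paper's implementation."""

    def __init__(self, in_ch, hid_ch, seed=0):
        rng = np.random.default_rng(seed)
        # Bottleneck projection: (in_ch + hid_ch) -> hid_ch channels.
        self.W_b = rng.standard_normal((in_ch + hid_ch, hid_ch)) * 0.1
        # Fused gate projection from the (cheap) bottleneck output:
        # produces input, forget, output gates and the cell candidate.
        self.W_g = rng.standard_normal((hid_ch, 4 * hid_ch)) * 0.1

    def __call__(self, x, h, c):
        # x: (H, W, in_ch) feature map; h, c: (H, W, hid_ch) states.
        b = np.tanh(np.concatenate([x, h], axis=-1) @ self.W_b)  # bottleneck
        i, f, o, g = np.split(b @ self.W_g, 4, axis=-1)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h_new = sigmoid(o) * np.tanh(c_new)
        # b can also feed the downstream detection layers directly.
        return b, h_new, c_new

# Refine and propagate a feature map across three frames.
H, W, in_ch, hid_ch = 4, 4, 8, 3
cell = BottleneckLSTMCell(in_ch, hid_ch)
h = np.zeros((H, W, hid_ch))
c = np.zeros((H, W, hid_ch))
rng = np.random.default_rng(1)
for _ in range(3):
    frame_features = rng.standard_normal((H, W, in_ch))
    b, h, c = cell(frame_features, h, c)
print(h.shape)  # (4, 4, 3)
```

Because the gate projection maps `hid_ch` rather than `in_ch + hid_ch` channels to `4 * hid_ch`, its cost shrinks as the bottleneck narrows, which is the source of the efficiency gain over a regular convolutional LSTM.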