Object detection is a major challenge in computer vision, involving both object classification and object localization within a scene. While deep neural networks have been shown in recent years to yield very powerful techniques for object detection, one of the biggest obstacles to widespread deployment of such networks on embedded devices is their high computational and memory requirements. Recently, there has been an increasing focus on exploring small deep neural network architectures for object detection that are more suitable for embedded devices, such as Tiny YOLO and SqueezeDet. Inspired by the efficiency of the Fire microarchitecture introduced in SqueezeNet and the object detection performance of the single-shot detection macroarchitecture introduced in SSD, this paper introduces Tiny SSD, a single-shot detection deep convolutional neural network for real-time embedded object detection. Tiny SSD is composed of a highly optimized, non-uniform Fire sub-network stack and a non-uniform sub-network stack of highly optimized SSD-based auxiliary convolutional feature layers, designed specifically to minimize model size while maintaining object detection performance. The resulting Tiny SSD possesses a model size of 2.3MB (~26X smaller than Tiny YOLO) while still achieving an mAP of 61.3% on VOC 2007 (~4.2% higher than Tiny YOLO). These experimental results show that very small deep neural network architectures can be designed for real-time object detection that are well-suited for embedded scenarios.
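To illustrate why Fire-based sub-networks keep the model small, the sketch below compares the weight count of a single Fire module against an equivalent standard 3x3 convolution. The channel sizes used here are illustrative (they follow SqueezeNet's published fire2 configuration, not Tiny SSD's own optimized non-uniform stack, whose per-layer sizes are given in the paper body); biases are omitted for clarity.

```python
def fire_params(in_ch: int, squeeze: int, expand1x1: int, expand3x3: int) -> int:
    """Weight count of a Fire module: a 1x1 squeeze layer followed by
    parallel 1x1 and 3x3 expand layers whose outputs are concatenated."""
    squeeze_w = in_ch * squeeze            # 1x1 squeeze convolutions
    expand1_w = squeeze * expand1x1        # 1x1 expand convolutions
    expand3_w = squeeze * expand3x3 * 9    # 3x3 expand convolutions
    return squeeze_w + expand1_w + expand3_w

def conv3x3_params(in_ch: int, out_ch: int) -> int:
    """Weight count of a plain 3x3 convolution with the same output width."""
    return in_ch * out_ch * 9

# Illustrative channel sizes (SqueezeNet fire2): 96 -> squeeze 16 -> expand 64+64.
fire = fire_params(96, 16, 64, 64)        # 11,776 weights
plain = conv3x3_params(96, 64 + 64)       # 110,592 weights
print(f"Fire: {fire}, plain 3x3: {plain}, ratio: {plain / fire:.1f}x")
```

The roughly 9x reduction per module is the kind of per-layer saving that, compounded across the network and combined with the optimized auxiliary feature layers, yields the 2.3MB model size reported above.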