Recent advances in computer vision, in the form of deep neural networks, have made it possible to query increasing volumes of video data with high accuracy. However, neural network inference is computationally expensive at scale: applying a state-of-the-art object detector in real time (i.e., 30+ frames per second) to a single video requires a $4000 GPU. In response, we present NoScope, a system for querying videos that can reduce the cost of neural network video analysis by up to three orders of magnitude via inference-optimized model search. Given a target video, object to detect, and reference neural network, NoScope automatically searches for and trains a sequence, or cascade, of models that preserves the accuracy of the reference network but is specialized to the target video and is therefore far less computationally expensive. NoScope cascades two types of models: specialized models that forgo the full generality of the reference model but faithfully mimic its behavior for the target video and object; and difference detectors that highlight temporal differences across frames. We show that the optimal cascade architecture differs across videos and objects, so NoScope uses an efficient cost-based optimizer to search across models and cascades. With this approach, NoScope achieves two- to three-order-of-magnitude speed-ups (265-15,500x real-time) on binary classification tasks over fixed-angle webcam and surveillance video while maintaining accuracy within 1-5% of state-of-the-art neural networks.
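The cascade described above can be illustrated with a minimal sketch. This is not NoScope's implementation; the function names, thresholds, and toy float-valued "frames" below are all hypothetical stand-ins, chosen only to show the control flow: a cheap difference detector reuses the previous label when a frame barely changes, a specialized model answers confident cases, and the expensive reference network is invoked only when the specialized model is unsure.

```python
def cascade_label(frames, diff_detector, specialized, reference,
                  diff_thresh=0.1, low=0.2, high=0.8):
    """Label each frame, escalating only when cheaper models are unsure.

    Hypothetical sketch: diff_detector(a, b) returns a change score,
    specialized(f) returns a probability in [0, 1], and reference(f)
    returns the (expensive) ground-truth boolean label.
    """
    labels = []
    prev_frame = None
    for frame in frames:
        # 1. Difference detector: if the frame barely changed, reuse the label.
        if prev_frame is not None and diff_detector(prev_frame, frame) < diff_thresh:
            labels.append(labels[-1])
        else:
            # 2. Specialized model: cheap, video-specific classifier.
            p = specialized(frame)
            if p <= low:
                labels.append(False)
            elif p >= high:
                labels.append(True)
            else:
                # 3. Fall back to the expensive reference network.
                labels.append(bool(reference(frame)))
        prev_frame = frame
    return labels


# Toy stand-ins (frames are just floats here) to exercise the control flow.
frames = [0.0, 0.05, 0.9, 0.5]
result = cascade_label(frames,
                       diff_detector=lambda a, b: abs(a - b),
                       specialized=lambda f: f,       # cheap score in [0, 1]
                       reference=lambda f: f > 0.5)   # stand-in "reference" model
```

In this toy run the second frame is resolved by the difference detector alone, the third by the specialized model's high-confidence band, and only the last ambiguous frame reaches the reference model, which is what makes the cascade cheap in the common case.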