Camera orientations (i.e., rotation and zoom) govern the content that a camera captures in a given scene, which in turn heavily influences the accuracy of live video analytics pipelines. However, existing analytics approaches leave this crucial adaptation knob untouched, instead opting to only alter the way that captured images from fixed orientations are encoded, streamed, and analyzed. We present MadEye, a camera-server system that automatically and continually adapts orientations to maximize accuracy for the workload and resource constraints at hand. To realize this using commodity pan-tilt-zoom (PTZ) cameras, MadEye embeds (1) a search algorithm that rapidly explores the massive space of orientations to identify a fruitful subset at any point in time, and (2) a novel knowledge distillation strategy to efficiently (i.e., using only camera resources) select the orientations that maximize workload accuracy. Experiments on diverse workloads show that MadEye boosts accuracy by 2.9-25.7% for the same resource usage, or achieves the same accuracy with 2-3.7x lower resource costs.