The need for automated real-time visual systems in applications such as smart-camera surveillance, smart environments, and drones calls for better methods for active visual monitoring and control. Traditionally, the active monitoring task has been handled by a pipeline of modules such as detection, filtering, and control. However, the parameters of such pipelines are difficult to jointly optimize and tune for real-time processing on resource-constrained systems. In this paper, a deep Convolutional Camera Controller Neural Network is proposed that maps visual information directly to camera movement, providing an efficient solution to the active vision problem. It is trained end-to-end, without bounding-box annotations, to control a camera and follow multiple targets from raw pixel values. Evaluation in both a simulation framework and a real experimental setup indicates that the proposed solution is robust to varying conditions and achieves better monitoring performance than traditional approaches, both in the number of targets monitored and in effective monitoring time. The proposed approach is also computationally less demanding and runs at over 10 FPS (~4x speedup) on an embedded smart camera, providing a practical and affordable solution to real-time active monitoring.
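The abstract's core idea, a network that maps raw pixels directly to a camera command with no intermediate detection or filtering stages, can be sketched as follows. This is a minimal illustrative assumption in PyTorch, not the paper's actual architecture: layer sizes, the two-dimensional pan/tilt output, and the tanh normalization are all hypothetical choices.

```python
# Hypothetical sketch: a small CNN mapping one RGB frame directly to a
# pan/tilt camera command, illustrating the end-to-end idea.
# All layer sizes are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class CameraControllerCNN(nn.Module):
    def __init__(self, num_actions: int = 2):  # assumed: pan and tilt velocities
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # makes the head independent of input resolution
        )
        self.head = nn.Linear(32, num_actions)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        x = self.features(frame).flatten(1)
        return torch.tanh(self.head(x))  # command components normalized to [-1, 1]


controller = CameraControllerCNN()
frame = torch.rand(1, 3, 120, 160)  # a single low-resolution RGB frame
command = controller(frame)         # shape (1, 2): one pan/tilt command
```

The single forward pass per frame, with no separate detector or tracker to run, is what makes such an end-to-end controller attractive on an embedded smart camera.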