This chapter supports the development of Cyber-Physical Systems (CPS) for the automated understanding of events and activities in video-surveillance applications. Such events are mostly captured by drones, CCTV cameras, or untrained individuals using low-end devices. Being unconstrained, these videos are highly challenging owing to a number of quality factors. We present an extensive account of the approaches taken to solve the problem over the years, ranging from early Structure from Motion (SFM) based methods to recent frameworks built on deep neural networks. We show that long-term motion patterns alone play a pivotal role in recognizing an event. Accordingly, each video is compactly represented by a fixed number of key-frames selected with a graph-based approach, and only the temporal features are exploited through a hybrid Convolutional Neural Network (CNN) + Recurrent Neural Network (RNN) architecture. The results we obtain are encouraging: they outperform standard temporal CNNs and are on par with methods that combine spatial information with motion cues. Exploring multistream models further, we devise a multi-tier fusion strategy for the spatial and temporal streams of a network, in which the individual prediction vectors at the video and frame levels are consolidated using a biased conflation technique. The fusion strategy yields a greater gain in precision at each stage than state-of-the-art methods, producing a strong consensus in classification. Results are reported on four benchmark datasets widely used in the domain of action recognition, namely CCV, HMDB, UCF-101 and KCV. We infer that better classification of the video sequences leads directly to more robust actuation of a system designed for event surveillance and combined object and activity tracking.
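To make the key-frame pipeline concrete, the following is a minimal PyTorch sketch of a hybrid CNN + RNN classifier over key-frames, not the chapter's exact model: the ResNet-18 backbone, the number of key-frames (16), the hidden size, and the class count are all illustrative assumptions.

```python
# A minimal sketch (assumed architecture, not the chapter's exact model)
# of a hybrid CNN+RNN classifier over a fixed number of key-frames.
import torch
import torch.nn as nn
from torchvision import models

class KeyFrameCNNRNN(nn.Module):
    def __init__(self, num_classes: int, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)  # per-frame feature extractor
        feat_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()               # drop the classification head
        self.cnn = backbone
        self.rnn = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, K, 3, H, W) -- K key-frames per video
        b, k, c, h, w = clips.shape
        feats = self.cnn(clips.view(b * k, c, h, w)).view(b, k, -1)
        _, (h_n, _) = self.rnn(feats)  # final hidden state summarizes frame order
        return self.head(h_n[-1])      # per-video class scores

# Example: 2 videos, 16 key-frames each, 10 hypothetical event classes.
model = KeyFrameCNNRNN(num_classes=10)
logits = model(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```

Because the CNN sees each key-frame independently and only the LSTM observes their order, the design isolates long-term motion patterns, which is the property the chapter argues is pivotal for event recognition.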
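The fusion step can be illustrated in the same spirit. One common form of conflation is the element-wise product of the streams' probability vectors followed by renormalization; a bias can be introduced as an exponent favoring one stream. The sketch below assumes this weighted-product form, and the weight `alpha` and smoothing constant `eps` are illustrative assumptions, not the chapter's reported values.

```python
# A minimal sketch of a biased conflation of two class-probability vectors;
# the weighted-product form, alpha, and eps are assumptions for illustration.
import numpy as np

def biased_conflation(p_spatial: np.ndarray,
                      p_temporal: np.ndarray,
                      alpha: float = 0.6,
                      eps: float = 1e-12) -> np.ndarray:
    """Fuse two prediction vectors into one consensus distribution.

    Conflation multiplies the distributions element-wise and renormalizes;
    the exponent alpha biases the fusion toward the spatial stream.
    """
    fused = (p_spatial + eps) ** alpha * (p_temporal + eps) ** (1.0 - alpha)
    return fused / fused.sum()

# Example: 4-class softmax outputs from the spatial and temporal streams.
p_s = np.array([0.50, 0.30, 0.15, 0.05])
p_t = np.array([0.40, 0.45, 0.10, 0.05])
print(biased_conflation(p_s, p_t))  # consensus sharpens around agreed classes
```

The multiplicative form rewards classes on which both streams agree and suppresses those supported by only one, which is the consensus behavior the fusion strategy aims for; applying it at both the frame and video levels gives the multi-tier scheme described above.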