Abnormal event detection (AED) in urban surveillance videos poses multiple challenges. Unlike many other computer vision problems, AED does not depend solely on the content of individual frames; it also depends on the appearance of objects and their movement through the scene. Various methods have been proposed to address the AED problem, and among them, deep-learning-based methods show the best results. This paper builds on deep learning and provides an effective way to detect and localize abnormal events in videos by handling spatio-temporal data. It uses generative adversarial networks (GANs) and applies transfer learning to a pre-trained convolutional neural network (CNN), which results in an accurate and efficient model. The efficiency of the model is further improved by processing the optical flow information of the video. Experiments are run on two benchmark datasets for the AED problem (UCSD Peds1 and UCSD Peds2), and the results are compared with previous methods using criteria such as the area under the curve (AUC) and the true positive rate (TPR). Experimental results show that the proposed method can effectively detect and localize abnormal events in crowded scenes.
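For readers unfamiliar with the evaluation criteria mentioned above, the following is a minimal illustrative sketch (not code from the paper) of how frame-level AUC can be computed from per-frame anomaly scores: sweep a detection threshold over the sorted scores, trace out the ROC curve of TPR versus FPR, and integrate it with the trapezoidal rule.

```python
def roc_auc(scores, labels):
    """Area under the ROC curve for per-frame anomaly scores.

    scores: higher means more anomalous.
    labels: 1 = abnormal frame, 0 = normal frame.
    """
    # Sort frames by score, descending, and sweep the threshold downward.
    pairs = sorted(zip(scores, labels), key=lambda p: -p[0])
    pos = sum(labels)            # number of abnormal frames
    neg = len(labels) - pos      # number of normal frames
    tp = fp = 0
    auc = 0.0
    prev_fpr = prev_tpr = 0.0
    i = 0
    while i < len(pairs):
        # Handle ties: consume all frames sharing the current score.
        score = pairs[i][0]
        while i < len(pairs) and pairs[i][0] == score:
            if pairs[i][1] == 1:
                tp += 1
            else:
                fp += 1
            i += 1
        fpr, tpr = fp / neg, tp / pos
        # Trapezoidal rule: area of the slice between consecutive ROC points.
        auc += (fpr - prev_fpr) * (tpr + prev_tpr) / 2
        prev_fpr, prev_tpr = fpr, tpr
    return auc

# Perfect separation of abnormal from normal frames yields AUC = 1.0;
# a random scorer hovers around 0.5.
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0
```

A higher AUC means the anomaly scores rank abnormal frames above normal ones more consistently, which is why it is the standard frame-level criterion on the UCSD Peds benchmarks.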