Detecting abnormal activities in real-world surveillance videos is an important yet challenging task, as prior knowledge about video anomalies is usually limited or unavailable. Although many approaches have been developed to address this problem, few of them can capture the normal spatio-temporal patterns both effectively and efficiently. Moreover, existing works seldom explicitly consider the frame-level local consistency and the global coherence of temporal dynamics in video sequences. To this end, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. Specifically, we first present a convolutional transformer to perform future frame prediction. It contains three key components, i.e., a convolutional encoder to capture the spatial information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame. Next, a dual discriminator based adversarial training procedure, which jointly considers an image discriminator that maintains local consistency at the frame level and a video discriminator that enforces global coherence of temporal dynamics, is employed to enhance the future frame prediction. Finally, the prediction error is used to identify abnormal video frames. Thorough empirical studies on three public video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue, and ShanghaiTech Campus, demonstrate the effectiveness of the proposed adversarial spatio-temporal modeling framework.
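To make the temporal self-attention component concrete, the following is a minimal numpy sketch of scaled dot-product attention applied over a clip of per-frame features. All names here are illustrative; the actual convolutional transformer operates on spatial feature maps with learned query/key/value projections, both of which this sketch omits.

```python
import numpy as np

def temporal_self_attention(feats):
    """Toy temporal self-attention over a clip of per-frame features.

    feats: array of shape (T, D) -- one D-dim feature vector per frame.
    A real convolutional transformer would use convolutional feature maps
    and learned projections; this sketch keeps only the attention idea.
    Returns an attended feature for the most recent (query) frame.
    """
    T, D = feats.shape
    query = feats[-1]                        # latest frame acts as the query
    scores = feats @ query / np.sqrt(D)      # scaled dot-product scores, shape (T,)
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()                 # attention weights over time
    return weights @ feats                   # temporal mixture, shape (D,)
```

The softmax over time lets the predictor weight informative past frames more heavily when encoding the temporal dynamics, rather than averaging the clip uniformly.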
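As for turning the prediction error into an anomaly indicator, a common recipe in future-frame-prediction methods is to score each frame by the PSNR between predicted and actual frames and min-max normalize per clip; whether CT-D2GAN uses exactly this normalization is an assumption, so the sketch below is illustrative only.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted and an actual frame."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse) if mse > 0 else np.inf

def regularity_scores(psnrs):
    """Min-max normalize clip PSNRs to [0, 1]; low scores flag likely anomalies."""
    p = np.asarray(psnrs, dtype=float)
    return (p - p.min()) / (p.max() - p.min() + 1e-8)
```

Frames the model predicts poorly get low PSNR, hence low regularity scores, and can be flagged as abnormal by thresholding.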