In this paper, we propose a new, simple, and effective Self-Supervised Spatio-Temporal Transformer (SPARTAN) approach to Group Activity Recognition (GAR) using unlabeled video data. Given a video, we create local and global spatio-temporal views with varying spatial patch sizes and frame rates. The proposed self-supervised objective matches the features of these contrasting views of the same video, encouraging representations that remain consistent under variations in the spatio-temporal domain. To the best of our knowledge, this is one of the first works to address the weakly supervised setting of GAR using video transformer encoders. Furthermore, leveraging the strengths of transformer models, our approach supports long-term relationship modeling along the spatio-temporal dimensions. SPARTAN performs well on two group activity recognition benchmarks, the NBA and Volleyball datasets, surpassing state-of-the-art results by a significant margin in terms of the MCA and MPCA metrics.
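The view-matching objective described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy mean-pool "encoder", the view parameters (`num_frames`, `crop`), and the cosine-similarity loss stand in for the paper's actual video transformer and training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video" tensor: (frames, height, width, channels).
video = rng.standard_normal((16, 64, 64, 3))

def sample_view(video, num_frames, crop):
    """Sample a spatio-temporal view: subsample frames (a frame-rate
    analogue) and take a spatial crop (a patch-size analogue)."""
    t_idx = np.linspace(0, video.shape[0] - 1, num_frames).astype(int)
    return video[t_idx, :crop, :crop, :]

def encode(view, proj):
    """Hypothetical stand-in encoder: global average pool over the view,
    then a linear projection, then unit-normalization."""
    pooled = view.mean(axis=(0, 1, 2))   # (channels,)
    z = pooled @ proj                    # project to embedding space
    return z / np.linalg.norm(z)

proj = rng.standard_normal((3, 8))

# Global view: full spatial extent, fewer frames.
# Local view: small spatial crop, denser frame sampling.
g = encode(sample_view(video, num_frames=8, crop=64), proj)
l = encode(sample_view(video, num_frames=16, crop=32), proj)

# Consistency objective: pull the two views' features together.
loss = 1.0 - float(g @ l)   # 1 - cosine similarity, in [0, 2]
print(loss)
```

Minimizing such a loss over many videos would push the encoder to produce features invariant to the spatial and temporal sampling of the view, which is the intuition behind the self-supervised objective above.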