Semi-supervised action recognition is a challenging but critical task due to the high cost of video annotations. Existing approaches mainly use convolutional neural networks, yet the recent, revolutionary vision transformer models have been less explored. In this paper, we investigate the use of transformer models under the SSL setting for action recognition. To this end, we introduce SVFormer, which adopts a steady pseudo-labeling framework (i.e., EMA-Teacher) to cope with unlabeled video samples. While a wide range of data augmentations have been shown effective for semi-supervised image classification, they generally produce limited results for video recognition. We therefore introduce a novel augmentation strategy, Tube TokenMix, tailored for video data, where video clips are mixed via a mask whose masked tokens are consistent over the temporal axis. In addition, we propose a temporal warping augmentation to cover the complex temporal variations in videos, which stretches selected frames to various temporal durations within the clip. Extensive experiments on three datasets, Kinetics-400, UCF-101, and HMDB-51, verify the advantage of SVFormer. In particular, SVFormer outperforms the state-of-the-art by 31.5% with fewer training epochs under the 1% labeling rate of Kinetics-400. Our method can hopefully serve as a strong benchmark and encourage future research on semi-supervised action recognition with transformer networks.
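To make the Tube TokenMix idea concrete, below is a minimal PyTorch sketch of a temporally consistent token mix: a binary mask is sampled once over the spatial token positions and repeated along the time axis, so each masked "tube" spans all frames of the clip. The function name `tube_token_mix`, the `(T, N, C)` token layout, and the `mask_ratio` parameter are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def tube_token_mix(tokens_a, tokens_b, mask_ratio=0.5):
    """Mix two token sequences with a spatial mask shared across time.

    tokens_a, tokens_b: (T, N, C) tensors - T frames, N spatial tokens
    per frame, C channels. The mask is sampled once over the N spatial
    positions and expanded along the temporal axis, so every masked
    position forms a tube through all T frames (hypothetical sketch).
    """
    T, N, C = tokens_a.shape
    mask = torch.rand(N) < mask_ratio            # (N,) spatial mask only
    tube = mask.view(1, N, 1).expand(T, N, C)    # same mask for every frame
    mixed = torch.where(tube, tokens_b, tokens_a)
    # lam = fraction of tokens taken from clip B, usable for label mixing
    lam = mask.float().mean().item()
    return mixed, lam
```

A mixed clip produced this way would typically be paired with a soft target `lam * label_b + (1 - lam) * label_a`, mirroring how mixing-based augmentations combine labels in the image domain.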
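The temporal warping augmentation can likewise be sketched as frame-index resampling: a few frames are stretched to random durations and the warped sequence is resampled back to the original clip length, changing the pacing but not the length. This is a minimal sketch under those assumptions; `temporal_warp` and its parameters are hypothetical names, not the authors' code.

```python
import numpy as np

def temporal_warp(num_frames=16, num_stretched=2, max_stretch=4, rng=None):
    """Warp the time axis of a clip by stretching a few frames.

    Each selected frame index is repeated for a random duration, then the
    warped index sequence is uniformly resampled back to num_frames, so
    the clip length is unchanged but its temporal pacing varies.
    """
    rng = rng or np.random.default_rng()
    chosen = set(rng.choice(num_frames, size=num_stretched, replace=False).tolist())
    warped = []
    for i in range(num_frames):
        repeats = int(rng.integers(2, max_stretch + 1)) if i in chosen else 1
        warped.extend([i] * repeats)
    # Uniformly resample the stretched sequence to the original length.
    pick = np.linspace(0, len(warped) - 1, num_frames).round().astype(int)
    return [warped[p] for p in pick]

# Usage: gather frames of a clip by the warped indices.
frame_ids = temporal_warp(16)   # e.g. [0, 1, 1, 1, 2, 3, ...]
```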