Motion, as the most distinct phenomenon in a video that involves changes over time, has been unique and critical to the development of video representation learning. In this paper, we ask the question: how important is motion, particularly for self-supervised video representation learning? To this end, we compose a duet of exploiting motion for both data augmentation and feature learning in the regime of contrastive learning. Specifically, we present a Motion-focused Contrastive Learning (MCL) method that builds on this duet. On one hand, MCL capitalizes on the optical flow of each frame in a video to temporally and spatially sample tubelets (i.e., sequences of associated frame patches across time) as data augmentations. On the other hand, MCL further aligns gradient maps of the convolutional layers to optical flow maps from spatial, temporal, and spatio-temporal perspectives, in order to ground motion information in feature learning. Extensive experiments with the R(2+1)D backbone demonstrate the effectiveness of MCL. On UCF101, a linear classifier trained on the representations learnt by MCL achieves 81.91% top-1 accuracy, outperforming ImageNet supervised pre-training by 6.78%. On Kinetics-400, MCL achieves 66.62% top-1 accuracy under the linear protocol. Code is available at https://github.com/YihengZhang-CV/MCL-Motion-Focused-Contrastive-Learning.
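To make the tubelet-sampling idea concrete, below is a minimal NumPy sketch of motion-biased spatio-temporal cropping; it is our own illustration rather than the authors' released implementation. It assumes per-frame optical-flow magnitudes have already been computed, and the function name `motion_focused_crop` and its parameters are hypothetical.

```python
import numpy as np

def motion_focused_crop(flow_mag, crop_size, num_frames, rng=None):
    """Sample a spatio-temporal tubelet start biased toward high-motion regions.

    flow_mag: (T, H, W) array of per-pixel optical-flow magnitudes (assumed
        precomputed and non-negative, with at least some motion present).
    crop_size: (crop_h, crop_w) spatial extent of the tubelet.
    num_frames: temporal extent of the tubelet.
    Returns (t0, y0, x0): start frame index and top-left corner of the crop.
    """
    rng = rng or np.random.default_rng()
    T, H, W = flow_mag.shape
    ch, cw = crop_size

    # Temporal sampling: weight each candidate clip start by the total
    # motion inside its num_frames window.
    per_frame = flow_mag.reshape(T, -1).mean(axis=1)
    window = np.convolve(per_frame, np.ones(num_frames), mode="valid")
    t0 = rng.choice(len(window), p=window / window.sum())

    # Spatial sampling: integrate motion over the chosen clip, then weight
    # each candidate crop by the summed motion inside its box, computed
    # efficiently with a 2-D summed-area table.
    clip_motion = flow_mag[t0:t0 + num_frames].sum(axis=0)
    cs = np.pad(clip_motion, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    box = cs[ch:, cw:] - cs[:-ch, cw:] - cs[ch:, :-cw] + cs[:-ch, :-cw]
    idx = rng.choice(box.size, p=(box / box.sum()).ravel())
    y0, x0 = np.unravel_index(idx, box.shape)
    return int(t0), int(y0), int(x0)
```

In this sketch, both the clip start and the crop location are drawn from distributions proportional to flow magnitude, so the resulting tubelets concentrate on moving content rather than static background, which is the intuition behind MCL's motion-focused augmentation.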