Instance-level contrastive learning techniques, which rely on data augmentation and a contrastive loss function, have found great success in the domain of visual representation learning. However, they are not suitable for exploiting the rich dynamical structure of video, as their operations are performed on many individual augmented instances. In this paper, we propose "Video Cross-Stream Prototypical Contrasting", a novel method which predicts consistent prototype assignments from both RGB and optical flow views, operating on sets of samples. Specifically, we alternate the optimization process; while optimizing one of the streams, all views are mapped to one set of stream prototype vectors. Each assignment is predicted from all views except the one matching the prediction, pushing representations closer to their assigned prototypes. As a result, more efficient video embeddings with ingrained motion information are learned, without the explicit need for optical flow computation during inference. We obtain state-of-the-art results on nearest-neighbour video retrieval and action recognition, outperforming the previous best by +3.2% on UCF101 using the S3D backbone (90.5% Top-1 accuracy), and by +7.2% on UCF101 and +15.1% on HMDB51 using the R(2+1)D backbone.
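To make the swapped-prediction step concrete, below is a minimal PyTorch-style sketch (not the authors' implementation) of the core idea: prototype scores from an RGB view and an optical-flow view are converted into soft assignments with a few Sinkhorn-Knopp normalisation steps, and each assignment is then predicted from the other view's scores. This is a simplified symmetric two-view version; the full method additionally alternates which stream's encoder and prototypes are being optimised and uses multiple views per stream. The function names (`sinkhorn`, `cross_stream_loss`), the temperature, and the iteration counts are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, eps=0.05, n_iters=3):
    """Turn prototype scores (B x K) into soft assignments that use all
    prototypes roughly equally, via a few Sinkhorn-Knopp iterations."""
    q = torch.exp(scores / eps).t()   # K x B
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True)   # normalise over samples per prototype
        q /= K
        q /= q.sum(dim=0, keepdim=True)   # normalise over prototypes per sample
        q /= B
    return (q * B).t()                    # B x K, each row sums to 1

def cross_stream_loss(z_rgb, z_flow, prototypes, temp=0.1):
    """Swapped prediction between an RGB view and a flow view, both mapped
    onto one shared set of prototype vectors (the stream being optimised)."""
    p = F.normalize(prototypes, dim=1)
    s_rgb = F.normalize(z_rgb, dim=1) @ p.t()     # B x K prototype scores
    s_flow = F.normalize(z_flow, dim=1) @ p.t()
    q_rgb, q_flow = sinkhorn(s_rgb), sinkhorn(s_flow)   # assignments, no grad
    # predict each view's assignment from the *other* view's scores
    loss_rgb = -(q_flow * F.log_softmax(s_rgb / temp, dim=1)).sum(dim=1).mean()
    loss_flow = -(q_rgb * F.log_softmax(s_flow / temp, dim=1)).sum(dim=1).mean()
    return 0.5 * (loss_rgb + loss_flow)

# Toy usage: 8 clips, 128-d embeddings, 300 prototypes (all values illustrative)
z_rgb, z_flow = torch.randn(8, 128), torch.randn(8, 128)
prototypes = torch.randn(300, 128)
print(cross_stream_loss(z_rgb, z_flow, prototypes))
```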