A modern self-supervised learning algorithm typically enforces persistency of an instance's representations across views. While very effective for learning holistic image and video representations, such an approach becomes sub-optimal for learning spatio-temporally fine-grained features in videos, where scenes and instances evolve through space and time. In this paper, we present the Contextualized Spatio-Temporal Contrastive Learning (ConST-CL) framework to effectively learn spatio-temporally fine-grained representations using self-supervision. We first design a region-based self-supervised pretext task that requires the model to transform instance representations from one view to another, guided by context features. We further introduce a simple network design that effectively reconciles the simultaneous learning of both holistic and local representations. We evaluate our learned representations on a variety of downstream tasks, and ConST-CL achieves state-of-the-art results on four datasets. For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set. For object tracking, ConST-CL achieves 78.1% precision and 55.2% success scores on OTB2015. Furthermore, ConST-CL achieves 94.8% and 71.9% top-1 fine-tuning accuracy on the video action recognition datasets UCF101 and HMDB51, respectively. We plan to release our code and models to the public.
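To make the region-based pretext task concrete, below is a minimal sketch, not the paper's exact implementation, of a cross-view region-level contrastive objective: region features from one view are "transformed" toward the other view through a single cross-attention step over that view's context features, then matched to the corresponding regions with an InfoNCE loss. All function and variable names (`contextualized_region_loss`, `regions_a`, `context_b`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def contextualized_region_loss(regions_a, regions_b, context_b, temperature=0.1):
    """Hypothetical sketch of a contextualized cross-view region contrastive loss.

    regions_a: (N, D) region features pooled from view A
    regions_b: (N, D) aligned region features pooled from view B (targets)
    context_b: (M, D) dense context features (e.g. flattened feature-map tokens) from view B
    """
    # "Transform" view-A regions into view B, guided by view-B context,
    # via one dot-product cross-attention step (illustrative stand-in for a learned head).
    attn = torch.softmax(regions_a @ context_b.t() / regions_a.shape[-1] ** 0.5, dim=-1)  # (N, M)
    predicted_b = attn @ context_b                                                        # (N, D)

    # InfoNCE: each predicted region should match its counterpart in view B
    # and be pushed away from the other regions in the same clip.
    pred = F.normalize(predicted_b, dim=-1)
    tgt = F.normalize(regions_b, dim=-1)
    logits = pred @ tgt.t() / temperature  # (N, N) similarity matrix
    labels = torch.arange(regions_a.shape[0])
    return F.cross_entropy(logits, labels)

# Toy usage with random features standing in for backbone outputs.
loss = contextualized_region_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(49, 128))
```

In the full framework this local objective would be trained jointly with a holistic clip-level contrastive loss; the sketch only illustrates the view-to-view transformation idea.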