Videos are a rich source for self-supervised learning (SSL) of visual representations due to the presence of natural temporal transformations of objects. However, current methods typically sample video clips at random for learning, which results in a poor supervisory signal. In this work, we propose PreViTS, an SSL framework that utilizes an unsupervised tracking signal to select clips containing the same object, which helps better utilize temporal transformations of objects. PreViTS further uses the tracking signal to spatially constrain the frame regions to learn from and trains the model to locate meaningful objects by providing supervision on Grad-CAM attention maps. To evaluate our approach, we train a momentum contrastive (MoCo) encoder on the VGG-Sound and Kinetics-400 datasets with PreViTS. Training with PreViTS outperforms representations learned by MoCo alone on both image recognition and video classification downstream tasks, obtaining state-of-the-art performance on action classification. PreViTS helps learn feature representations that are more robust to changes in background and context, as shown by experiments on image and video datasets with background changes. Learning from large-scale uncurated videos with PreViTS could lead to more accurate and robust visual feature representations.
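At the core of the MoCo-style training described above is an InfoNCE contrastive loss: an embedding of one clip (the query) is pulled toward the embedding of a second clip of the same tracked object (the positive key) and pushed away from a queue of embeddings of other clips (negatives). The sketch below is a minimal, hedged illustration of that loss in NumPy, not the authors' implementation; the encoder, the tracking-based clip selection, and the Grad-CAM attention supervision are all omitted, and the function name and temperature default are our own choices.

```python
import numpy as np

def info_nce_loss(query, pos_key, neg_queue, temperature=0.07):
    """InfoNCE loss for one query embedding.

    query:     (d,) embedding of a clip.
    pos_key:   (d,) embedding of another clip of the same tracked object.
    neg_queue: (n, d) queue of embeddings of other clips (negatives).
    """
    # L2-normalize so similarities are cosine similarities.
    q = query / np.linalg.norm(query)
    k = pos_key / np.linalg.norm(pos_key)
    neg = neg_queue / np.linalg.norm(neg_queue, axis=1, keepdims=True)

    # One positive logit followed by n negative logits.
    logits = np.concatenate([[q @ k], neg @ q]) / temperature

    # Cross-entropy with the positive at index 0 (numerically stabilized).
    logits -= logits.max()
    return -logits[0] + np.log(np.exp(logits).sum())
```

A clip pair showing the same object yields a much lower loss than a mismatched pair, which is why selecting positives with a tracking signal (rather than random sampling) gives a cleaner supervisory signal.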