We propose a self-supervised visual learning method that predicts the variable playback speed of a video. Without semantic labels, we learn a spatio-temporal visual representation of the video by leveraging the variations in visual appearance induced by different playback speeds, under the assumption of temporal coherence. To capture the spatio-temporal visual variations across the entire video, we not only predict a single playback speed but also generate clips with various playback speeds and directions from randomized starting points. The visual representation can thus be learned from the meta-information (playback speed and direction) of the video alone. We also propose a new layer-dependable temporal group normalization method for 3D convolutional networks that improves representation learning: the temporal features are divided into several groups, and each group is normalized with its own corresponding parameters. We validate the effectiveness of our method by fine-tuning on action recognition and video retrieval tasks on UCF-101 and HMDB-51.
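As a concrete illustration of the clip-generation step, the following is a minimal NumPy sketch of sampling a clip at a random playback speed and direction from a randomized starting point. The speed set, clip length, and function name here are illustrative assumptions, not necessarily the configuration used in the paper.

```python
import numpy as np

def sample_clip(frames, clip_len=16, speeds=(1, 2, 4, 8)):
    """Sample a training clip at a random playback speed and direction.

    frames: array of shape (T, H, W, C) holding the decoded video.
    Returns the clip and the (speed, direction) labels that serve as
    the self-supervision targets. Speed set and clip length are
    hypothetical choices for this sketch.
    """
    speed_label = np.random.randint(len(speeds))
    speed = speeds[speed_label]
    # A speed-s clip of length L spans s * L source frames.
    span = speed * clip_len
    assert len(frames) >= span, "video too short for this speed"
    # Randomized starting point within the valid range.
    start = np.random.randint(0, len(frames) - span + 1)
    indices = start + speed * np.arange(clip_len)

    direction_label = np.random.randint(2)  # 0: forward, 1: backward
    if direction_label == 1:
        indices = indices[::-1]  # reversed playback direction

    return frames[indices], speed_label, direction_label
```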
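The temporal group normalization can be sketched in PyTorch as follows, assuming 3D-conv features of shape (N, C, T, H, W) and a group count chosen per layer to match that layer's temporal resolution ("layer-dependable"). The per-group, per-channel affine parameterization below is our reading of the description, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn as nn

class TemporalGroupNorm(nn.Module):
    """Normalize 3D-conv features (N, C, T, H, W) in temporal groups.

    The temporal axis is split into `num_groups` chunks; each chunk is
    normalized over its (C, T/G, H, W) elements and rescaled with its
    own learnable gain and bias. `num_groups` can differ per layer.
    """

    def __init__(self, num_channels, num_groups, eps=1e-5):
        super().__init__()
        self.num_groups = num_groups
        self.eps = eps
        # One affine pair per temporal group (assumed parameterization).
        self.gamma = nn.Parameter(torch.ones(num_groups, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_groups, num_channels))

    def forward(self, x):
        n, c, t, h, w = x.shape
        g = self.num_groups
        assert t % g == 0, "temporal length must be divisible by num_groups"
        # (N, C, G, T/G, H, W): gather each temporal group's elements.
        x = x.view(n, c, g, t // g, h, w)
        mean = x.mean(dim=(1, 3, 4, 5), keepdim=True)
        var = x.var(dim=(1, 3, 4, 5), keepdim=True, unbiased=False)
        x = (x - mean) / torch.sqrt(var + self.eps)
        # Per-group, per-channel affine; reshape (G, C) to broadcast.
        gamma = self.gamma.t().view(1, c, g, 1, 1, 1)
        beta = self.beta.t().view(1, c, g, 1, 1, 1)
        return (x * gamma + beta).view(n, c, t, h, w)
```

For example, an early layer with 16 temporal frames might use `TemporalGroupNorm(num_channels=64, num_groups=4)` on an input of shape `(2, 64, 16, 28, 28)`, while deeper layers with fewer frames would use fewer groups.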