To fully explore the fine-grained temporal structure and global-local chronological characteristics of videos for self-supervised representation learning, this work takes a closer look at exploiting the temporal structure of videos and proposes a novel self-supervised method named Temporal Contrastive Graph (TCG). In contrast to existing methods that randomly shuffle video frames or snippets within a video, our proposed TCG is rooted in a hybrid graph contrastive learning strategy that treats inter-snippet and intra-snippet temporal relationships as self-supervision signals for temporal representation learning. To increase the temporal diversity of features more comprehensively and precisely, TCG integrates prior knowledge about frame and snippet orders into temporal contrastive graph structures, i.e., the intra- and inter-snippet temporal contrastive graph modules. By randomly removing edges and masking node features of the intra-snippet or inter-snippet graphs, TCG generates different correlated graph views; specific contrastive losses are then designed to maximize the agreement between node embeddings across these views. To learn a global context representation and adaptively recalibrate channel-wise features, we further introduce an adaptive video snippet order prediction module, which leverages relational knowledge among video snippets to predict their actual order. Experimental results demonstrate the superiority of TCG over state-of-the-art methods on large-scale action recognition and video retrieval benchmarks.
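The view-generation and contrastive-agreement steps described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual architecture: the one-step mean-neighbor propagation stands in for the real graph encoder, and all hyperparameters (`edge_drop`, `feat_mask`, `tau`) and the toy 4-node graph are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(adj, feats, edge_drop=0.2, feat_mask=0.2):
    """Create a correlated graph view by randomly removing edges
    and masking node feature dimensions, as the abstract describes."""
    a = adj.copy()
    drop = rng.random(a.shape) < edge_drop
    drop = np.triu(drop, 1)
    drop = drop | drop.T            # keep the graph undirected
    a[drop] = 0.0
    x = feats.copy()
    x[:, rng.random(feats.shape[1]) < feat_mask] = 0.0  # mask whole feature dims
    return a, x

def embed(adj, feats):
    """Stand-in encoder: one round of mean-neighbor propagation
    with self-loops, followed by L2 normalization."""
    a = adj + np.eye(adj.shape[0])
    z = (a / a.sum(axis=1, keepdims=True)) @ feats
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss: the same node in the two views is the
    positive pair; all other nodes in the second view are negatives."""
    sim = np.exp((z1 @ z2.T) / tau)
    return float(np.mean(-np.log(np.diag(sim) / sim.sum(axis=1))))

# Toy graph: 4 nodes (e.g. snippets) on a ring, with random features.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 8))

z1 = embed(*augment(adj, feats))
z2 = embed(*augment(adj, feats))
loss = contrastive_loss(z1, z2)   # minimized when cross-view node embeddings agree
```

Minimizing this loss pulls each node's embeddings together across the two augmented views while pushing apart embeddings of different nodes, which is the "maximize the agreement between node embeddings in different views" objective stated above.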