Recent self-supervised video representation learning methods have achieved significant success by exploiting essential properties of videos, e.g., speed and temporal order. This work exploits an essential yet under-explored property of videos, video continuity, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section approximation, that jointly supervise a shared backbone for video representation learning. This self-supervision approach, termed Continuity Perception Network (CPNet), solves the three tasks jointly and encourages the backbone network to learn local and long-range motion and context representations. It outperforms prior art on multiple downstream tasks, such as action recognition, video retrieval, and action localization. Additionally, video continuity is complementary to other coarse-grained video properties for representation learning, and integrating the proposed pretext tasks into prior methods yields substantial performance gains.