This paper presents a self-supervised method for learning reliable visual correspondence from unlabeled videos. We formulate correspondence learning as finding paths in a joint space-time graph, where nodes are grid patches sampled from frames and are linked by two types of edges: (i) neighbor relations that determine the aggregation strength from intra-frame neighbors in space, and (ii) similarity relations that indicate the transition probability of inter-frame paths across time. Leveraging cycle-consistency in videos, our contrastive learning objective discriminates dynamic objects from both their neighboring views and their temporal views. Compared with prior work, our approach actively explores the neighbor relations of central instances to learn a latent association between center-neighbor pairs (e.g., "hand -- arm") across time, thus improving instance discrimination. Without fine-tuning, our learned representation outperforms state-of-the-art self-supervised methods on a variety of visual tasks, including video object propagation, part propagation, and pose keypoint tracking. Our self-supervised method also surpasses some fully supervised algorithms designed for these specific tasks.
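The inter-frame similarity relations and the cycle-consistency objective above can be sketched in a few lines. The following is a toy NumPy illustration under simplifying assumptions, not the paper's implementation: patch features are assumed given (feature extraction and the intra-frame neighbor aggregation are omitted), transition probabilities are a temperature-scaled softmax over cosine similarities, and the loss penalizes round-trip walks that fail to return each patch to itself.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transition(f_a, f_b, tau=0.07):
    """Inter-frame transition probabilities between two frames' patch
    features (rows), via softmax over cosine-similarity affinities.
    tau is an assumed temperature hyperparameter."""
    f_a = f_a / np.linalg.norm(f_a, axis=1, keepdims=True)
    f_b = f_b / np.linalg.norm(f_b, axis=1, keepdims=True)
    return softmax(f_a @ f_b.T / tau, axis=1)

def cycle_loss(frames, tau=0.07):
    """Walk forward through the frame sequence and back again; by
    cycle-consistency, the composed round-trip transition matrix
    should be the identity. Returns the cross-entropy against that
    identity target."""
    n = frames[0].shape[0]
    path = list(frames) + list(frames[-2::-1])  # t0 -> ... -> tN -> ... -> t0
    P = np.eye(n)
    for a, b in zip(path[:-1], path[1:]):
        P = P @ transition(a, b, tau)
    return -np.mean(np.log(P[np.arange(n), np.arange(n)] + 1e-9))
```

With well-separated, temporally stable patch features the round-trip transition is near-identity and the loss is near zero; in training, the loss would instead be backpropagated into the feature extractor so that reliable correspondences emerge.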