Self-Supervised Video Hashing (SSVH) models learn to generate short binary representations for videos without ground-truth supervision, enabling efficient large-scale video retrieval and attracting increasing research attention. The success of SSVH lies in understanding video content and capturing the semantic relations among unlabeled videos. Typically, state-of-the-art SSVH methods address these two points in a two-stage training pipeline: they first train an auxiliary network with instance-wise mask-and-predict tasks, and then train a hashing model to preserve the pseudo-neighborhood structure transferred from the auxiliary network. This consecutive training strategy is inflexible and also unnecessary. In this paper, we propose a simple yet effective one-stage SSVH method called ConMH, which incorporates video semantic understanding and video similarity modeling in a single stage. To capture video semantic information for better hash learning, we adopt an encoder-decoder structure to reconstruct a video from its temporally masked frames. In particular, we find that a higher masking ratio helps video understanding. In addition, we fully exploit the similarity relationship between videos by maximizing agreement between two augmented views of a video, which yields more discriminative and robust hash codes. Extensive experiments on three large-scale video datasets (\ie, FCVID, ActivityNet and YFCC) show that ConMH achieves state-of-the-art results. Code is available at https://github.com/huangmozhi9527/ConMH.
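To make the two training signals concrete, the following is a minimal PyTorch-style sketch, not the authors' released implementation: the module and parameter names (`encoder`, `decoder`, `hash_head`, `mask_ratio`, `temperature`, `alpha`) are illustrative assumptions. It combines a masked frame-reconstruction loss with a contrastive loss that maximizes agreement between two augmented views of the same video.

```python
import torch
import torch.nn.functional as F

def temporal_mask(frames, mask_ratio=0.75):
    """Keep a random subset of frames along the temporal axis.

    frames: (batch, num_frames, feature_dim) pre-extracted frame features.
    A high mask ratio (e.g. 0.75) is reported to help video understanding.
    """
    b, t, d = frames.shape
    num_keep = max(1, int(t * (1.0 - mask_ratio)))
    keep_idx = torch.rand(b, t, device=frames.device).argsort(dim=1)[:, :num_keep]
    return torch.gather(frames, 1, keep_idx.unsqueeze(-1).expand(-1, -1, d))

def contrastive_loss(z1, z2, temperature=0.5):
    """NT-Xent-style agreement between two views of the same videos."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def training_step(frames, encoder, decoder, hash_head, alpha=1.0):
    """One-stage objective: masked reconstruction + cross-view agreement."""
    # Two independently masked views of the same video act as augmentations.
    v1, v2 = temporal_mask(frames), temporal_mask(frames)
    h1, h2 = encoder(v1), encoder(v2)

    # Reconstruction: predict the full frame sequence from each masked view.
    # (A full implementation would also feed the mask positions to the decoder.)
    recon_loss = (F.mse_loss(decoder(h1), frames) +
                  F.mse_loss(decoder(h2), frames))

    # Agreement: pooled hash representations of the two views should match.
    z1 = hash_head(h1.mean(dim=1))  # continuous codes, binarized at inference
    z2 = hash_head(h2.mean(dim=1))
    return recon_loss + alpha * contrastive_loss(z1, z2)
```

Because both losses are computed in the same forward pass, there is no auxiliary network and no transfer of a pseudo-neighborhood structure; the weighting `alpha` between the two terms is a hypothetical hyperparameter in this sketch.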