This paper presents SimVTP: a Simple Video-Text Pretraining framework via masked autoencoders. We randomly mask out spatial-temporal tubes of the input video and word tokens of the input text, and then feed them into a unified autoencoder to reconstruct the missing pixels and words. Our SimVTP has several properties: 1) Thanks to the unified autoencoder, SimVTP reconstructs the masked signal of one modality with help from the other modality, which implicitly learns the cross-modal alignment between video tubes and text tokens. 2) SimVTP not only benefits from a high video masking ratio (e.g. 90%) due to the temporal redundancy of video, but also needs a high text masking ratio (e.g. 75%), much higher than BERT's (e.g. 15%), to achieve optimal performance. This is because the aid of the video modality makes text reconstruction less challenging, so a higher mask ratio is needed to make the pretext task hard enough for useful feature learning. 3) Equipping SimVTP with video-text contrastive learning (VTC) and video-text matching (VTM), two commonly used cross-modal training strategies, further improves transfer performance significantly. 4) SimVTP is data-efficient: e.g., pre-trained on only 10% of the data of WebVid-2M, SimVTP achieves surprisingly good results (43.8 R@1) on MSRVTT, far above recent state-of-the-art methods pre-trained on both CC3M and WebVid-2M. We transfer our pre-trained model to various downstream tasks and achieve superior performance. The codes and models will be released at https://github.com/mayuelala/SimVTP.
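The masking strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `tube_mask` is a hypothetical helper that draws a random boolean mask, where a "tube" is the same spatial patch position masked across all frames, and the same routine serves for text-token masking.

```python
import random

def tube_mask(num_items: int, mask_ratio: float, rng=random) -> list[bool]:
    """Randomly mask a fixed fraction of items (True = masked).

    For video, an item is a spatial-temporal tube (the same spatial
    patch across all frames); for text, an item is a word token.
    """
    num_masked = int(num_items * mask_ratio)
    masked = set(rng.sample(range(num_items), num_masked))
    return [i in masked for i in range(num_items)]

# SimVTP uses a high video mask ratio (90%) and text mask ratio (75%);
# only the visible (unmasked) items are fed to the unified encoder.
video_mask = tube_mask(num_items=196, mask_ratio=0.90)  # 196 tubes, 176 masked
text_mask = tube_mask(num_items=32, mask_ratio=0.75)    # 32 tokens, 24 masked
```

The decoder then reconstructs the masked pixels and words from the joint visible set, which is what forces the model to align the two modalities.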