We present a simple yet effective end-to-end Video-Language Pre-training (VidLP) framework, Masked Contrastive Video-language Pretraining (MAC), for video-text retrieval tasks. MAC reduces the spatial and temporal redundancy of video representations in the VidLP model through a mask sampling mechanism, improving pre-training efficiency. In contrast to conventional temporal sparse sampling, we randomly mask a high ratio of spatial regions and feed only the visible regions into the encoder, a form of sparse spatial sampling. For consistency, we apply the same mask sampling technique to text inputs. Instead of blindly adopting the mask-then-prediction paradigm of MAE, we propose a mask-then-alignment paradigm for efficient video-text alignment. The motivation is that video-text retrieval relies on high-level alignment rather than low-level reconstruction, and multimodal alignment under masked modeling encourages the model to learn robust and general multimodal representations from incomplete and unstable inputs. Together, these designs enable efficient end-to-end pre-training: FLOPs are reduced by 60%, pre-training is accelerated by 3x, and performance improves. MAC achieves state-of-the-art results on several video-text retrieval datasets, including MSR-VTT, DiDeMo, and ActivityNet. Our approach also generalizes across input modalities: with minimal modifications, it achieves competitive results on image-text retrieval tasks.
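The sparse spatial sampling described above can be illustrated with a minimal sketch. The function below is a hypothetical, simplified version of MAE-style random masking (the actual MAC implementation is not shown in this abstract): given a sequence of patch embeddings, it keeps only a random subset of visible tokens, which is what the encoder would then process.

```python
import numpy as np

def mask_sampling(tokens, mask_ratio=0.75, seed=0):
    """Randomly mask a high ratio of patch tokens; only the visible
    subset is returned, to be fed into the encoder.

    tokens: (num_patches, dim) array of patch embeddings.
    Returns (visible_tokens, kept_indices).
    """
    rng = np.random.default_rng(seed)
    num_patches = tokens.shape[0]
    num_keep = max(1, int(round(num_patches * (1.0 - mask_ratio))))
    # Random permutation, keep the first num_keep indices, restore order.
    keep_idx = np.sort(rng.permutation(num_patches)[:num_keep])
    return tokens[keep_idx], keep_idx

# Example: a 14x14 patch grid (196 tokens) with a 75% mask ratio
# leaves 49 visible tokens for the encoder, cutting its FLOPs roughly
# in proportion to the number of dropped tokens.
patches = np.random.randn(196, 768)
visible, kept = mask_sampling(patches, mask_ratio=0.75)
```

Because the encoder never sees the masked tokens, its cost scales with the number of visible tokens rather than the full patch grid, which is the source of the efficiency gains claimed above.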