Pre-trained image-text models, like CLIP, have demonstrated the strong power of vision-language representation learned from large-scale web-collected image-text data. In light of the well-learned visual features, several existing works transfer image representations to the video domain and achieve good results. However, how to utilize image-language pre-trained models (e.g., CLIP) for video-language pre-training (post-pretraining) is still underexplored. In this paper, we investigate two questions: 1) what are the factors hindering post-pretraining CLIP from further improving the performance on video-language tasks? and 2) how can the impact of these factors be mitigated? Through a series of comparative experiments and analyses, we find that the data scale and the domain gap between language sources have significant impacts. Motivated by these observations, we propose an Omnisource Cross-modal Learning method equipped with a Video Proxy mechanism on the basis of CLIP, namely CLIP-ViP. Extensive results show that our approach improves the performance of CLIP on video-text retrieval by a large margin. Our model also achieves SOTA results on a variety of datasets, including MSR-VTT, DiDeMo, LSMDC, and ActivityNet. We will release our code and pre-trained CLIP-ViP models at https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP.
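To make the two named components more concrete, the following is a minimal PyTorch sketch of how a Video Proxy mechanism and an omnisource contrastive objective could be realized: learnable proxy tokens are prepended to the frame patch tokens before the CLIP ViT so self-attention can aggregate video-level context, and a symmetric InfoNCE loss is averaged over multiple text sources (e.g., subtitles and captions). All module names, shapes, and default values here are illustrative assumptions based only on the abstract, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoProxy(nn.Module):
    """Hypothetical sketch: learnable proxy tokens prepended to the patch
    tokens of all sampled frames, so the ViT's self-attention can pool
    video-level information. Exact design in CLIP-ViP may differ."""

    def __init__(self, num_proxies: int = 4, dim: int = 768):
        super().__init__()
        self.proxies = nn.Parameter(0.02 * torch.randn(num_proxies, dim))

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_frames * num_patches, dim)
        b = patch_tokens.size(0)
        proxies = self.proxies.unsqueeze(0).expand(b, -1, -1)
        return torch.cat([proxies, patch_tokens], dim=1)


def omnisource_contrastive_loss(video_emb: torch.Tensor,
                                subtitle_emb: torch.Tensor,
                                caption_emb: torch.Tensor,
                                temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between video embeddings and each text source
    (e.g., original subtitles and auxiliary captions), averaged across
    sources. This is an assumed formulation, not the paper's exact loss."""
    video_emb = F.normalize(video_emb, dim=-1)
    losses = []
    for text_emb in (subtitle_emb, caption_emb):
        text_emb = F.normalize(text_emb, dim=-1)
        logits = video_emb @ text_emb.t() / temperature
        labels = torch.arange(logits.size(0), device=logits.device)
        losses.append(0.5 * (F.cross_entropy(logits, labels) +
                             F.cross_entropy(logits.t(), labels)))
    return torch.stack(losses).mean()
```

In this sketch, the proxy tokens play a role analogous to a video-level [CLS] token, while averaging the contrastive loss over heterogeneous text sources is one plausible way to reduce the language-domain gap the abstract identifies.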