Recent large-scale video-language pre-trained models have shown appealing performance on various downstream tasks. However, the pre-training process is computationally expensive, as it requires millions of video-text pairs and each video carries a highly redundant data structure. To mitigate these problems, we propose LiteVL, which adapts a pre-trained image-language model, BLIP, into a video-text model directly on downstream tasks, without heavy pre-training. To compensate for the temporal modeling that the image-language model lacks, we add temporal attention modules with dynamic temporal scaling to the image encoder of BLIP. Beyond this model-wise adaptation, we also propose a non-parametric pooling mechanism that adaptively reweights the fine-grained video embeddings conditioned on the text. Experimental results on text-video retrieval and video question answering show that LiteVL outperforms previous video-language pre-trained models by a clear margin, despite using no video-language pre-training.
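To make the two mechanisms concrete, below is a minimal, hypothetical PyTorch sketch of (1) a temporal self-attention block inserted into a pre-trained image encoder whose residual output is modulated by a learned dynamic scale, and (2) a non-parametric, text-conditioned pooling that reweights frame-level video embeddings by their similarity to the text embedding. Names, shapes, and the exact scaling rule are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalAttentionBlock(nn.Module):
    """Self-attention over the time axis, applied independently per spatial patch."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Dynamic scaling (assumed form): a per-channel gate initialised near zero,
        # so the block starts close to an identity and preserves BLIP's features.
        self.scale = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, D) = batch, frames, patches, channels
        B, T, N, D = x.shape
        h = x.permute(0, 2, 1, 3).reshape(B * N, T, D)  # attend across frames per patch
        h = self.norm(h)
        h, _ = self.attn(h, h, h)
        h = h.reshape(B, N, T, D).permute(0, 2, 1, 3)
        return x + self.scale * h  # scaled residual connection


def text_conditioned_pool(video_emb: torch.Tensor, text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Non-parametric pooling: reweight per-frame embeddings by text similarity.

    video_emb: (B, T, D) frame-level embeddings; text_emb: (B, D).
    No learned parameters are involved, only similarity-based softmax weights.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim = torch.einsum("btd,bd->bt", v, t) / temperature  # (B, T) text-frame similarities
    weights = sim.softmax(dim=-1)
    return torch.einsum("bt,btd->bd", weights, video_emb)  # (B, D) pooled video embedding
```

In this sketch, the near-zero initial scale lets the adapted model start from the image-language checkpoint's behavior, while the pooling step needs no extra parameters, consistent with the lightweight-adaptation goal stated above.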