Visual-language pre-training has shown great success in learning joint visual-textual representations from large-scale web data, demonstrating remarkable ability for zero-shot generalisation. This paper presents a simple method to efficiently adapt one pre-trained visual-language model to novel tasks with minimal training; here, we consider video understanding tasks. Specifically, we propose to optimise a few random vectors, termed continuous prompt vectors, that convert the novel tasks into the same format as the pre-training objectives. In addition, to bridge the gap between static images and videos, temporal information is encoded with lightweight Transformers stacked on top of frame-wise visual features. Experimentally, we conduct extensive ablation studies to analyse the critical components and necessary design choices. On 9 public benchmarks covering action recognition, action localisation, and text-video retrieval, across closed-set, few-shot, and open-set scenarios, we achieve performance competitive with or exceeding existing state-of-the-art methods, despite training significantly fewer parameters.
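The two trainable components described above, continuous prompt vectors wrapped around class-name embeddings before a frozen text encoder, and a lightweight temporal Transformer over frozen frame-wise visual features, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, layer counts, and the stand-in `nn.TransformerEncoder` used in place of a real pre-trained text encoder are all assumptions for clarity.

```python
import torch
import torch.nn as nn


class PromptedTextEncoder(nn.Module):
    """Hypothetical sketch: learnable continuous prompt vectors are
    concatenated around a class-name token embedding, then passed through
    a frozen text encoder (a small Transformer stands in for the real
    pre-trained one)."""

    def __init__(self, dim=512, n_prefix=8, n_suffix=8):
        super().__init__()
        # the only trainable parameters: randomly initialised prompt vectors
        self.prefix = nn.Parameter(torch.randn(n_prefix, dim) * 0.02)
        self.suffix = nn.Parameter(torch.randn(n_suffix, dim) * 0.02)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.encoder.parameters():  # keep the encoder frozen
            p.requires_grad = False

    def forward(self, class_embed):  # class_embed: (B, T_cls, D)
        b = class_embed.size(0)
        pre = self.prefix.unsqueeze(0).expand(b, -1, -1)
        suf = self.suffix.unsqueeze(0).expand(b, -1, -1)
        x = torch.cat([pre, class_embed, suf], dim=1)
        return self.encoder(x).mean(dim=1)  # (B, D) text feature


class TemporalPool(nn.Module):
    """Lightweight Transformer stacked on top of frozen frame-wise
    visual features to encode temporal information."""

    def __init__(self, dim=512, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, frame_feats):  # frame_feats: (B, T_frames, D)
        return self.temporal(frame_feats).mean(dim=1)  # (B, D) video feature
```

At inference, classification would reduce to cosine similarity between the pooled video feature and the prompted text feature of each class, i.e. the same matching format as the pre-training objective.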