Large-scale multi-modal training with image-text pairs imparts strong generalization to the CLIP model. Since training on a similar scale for videos is infeasible, recent approaches focus on the effective transfer of image-based CLIP to the video domain. In this pursuit, new parametric modules are added to learn temporal information and inter-frame relationships, which requires meticulous design effort. Furthermore, when the resulting models are trained on videos, they tend to overfit to the given task distribution and lose generalization ability. This raises the question: how can image-level CLIP representations be transferred effectively to videos? In this work, we show that a simple Video Fine-tuned CLIP (ViFi-CLIP) baseline is generally sufficient to bridge the domain gap from images to videos. Our qualitative analysis illustrates that frame-level processing through the CLIP image encoder, followed by feature pooling and similarity matching with the corresponding text embeddings, helps ViFi-CLIP implicitly model temporal cues. Such fine-tuning helps the model focus on scene dynamics, moving objects, and inter-object relationships. For low-data regimes where full fine-tuning is not viable, we propose a `bridge and prompt' approach that first uses fine-tuning to bridge the domain gap and then learns prompts on the language and vision sides to adapt CLIP representations. We extensively evaluate this simple yet strong baseline on zero-shot, base-to-novel generalization, few-shot, and fully supervised settings across five video benchmarks. Our code is available at https://github.com/muzairkhattak/ViFi-CLIP.
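To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the frame-level encoding, temporal pooling, and text-matching steps, not the authors' exact implementation. It assumes a CLIP-style model exposing `encode_image` and `encode_text` (as in the open-source CLIP API); `vifi_clip_logits`, `class_prompts`, and `logit_scale` are illustrative names.

```python
import torch
import torch.nn.functional as F

def vifi_clip_logits(clip_model, video, class_prompts, logit_scale):
    """Frame-level CLIP encoding -> temporal average pooling -> text matching.

    video:         (B, T, 3, H, W) batch of T sampled frames per clip
    class_prompts: tokenized text prompts, one per action class
    logit_scale:   CLIP's learned temperature (scalar)
    """
    B, T = video.shape[:2]

    # Process every frame independently with the CLIP image encoder.
    frame_feats = clip_model.encode_image(video.flatten(0, 1))  # (B*T, D)
    frame_feats = frame_feats.view(B, T, -1)

    # Temporal pooling: the video-level feature is the mean of frame features,
    # which is where temporal cues get implicitly aggregated.
    video_feats = F.normalize(frame_feats.mean(dim=1), dim=-1)  # (B, D)

    # Class embeddings from the CLIP text encoder.
    text_feats = F.normalize(clip_model.encode_text(class_prompts), dim=-1)  # (C, D)

    # Cosine-similarity logits between video and class-text features.
    return logit_scale * video_feats @ text_feats.t()  # (B, C)
```

Under this sketch, full fine-tuning would update both encoders with a standard cross-entropy loss over these logits, while the `bridge and prompt' setting would instead freeze the encoders and learn prompt vectors on the vision and language sides.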