Large-scale vision-language pre-trained models have shown promising transferability to various downstream tasks. As the size of these foundation models and the number of downstream tasks grow, the standard full fine-tuning paradigm becomes unsustainable due to heavy computational and storage costs. This paper proposes UniAdapter, which unifies unimodal and multimodal adapters for parameter-efficient cross-modal adaptation of pre-trained vision-language models. Specifically, adapters are distributed to the different modalities and their interactions, and the total number of tunable parameters is reduced by partial weight sharing. This unified, knowledge-sharing design yields powerful cross-modal representations that benefit various downstream tasks while requiring only 1.0%-2.0% of the pre-trained model's parameters to be tuned. Extensive experiments on 6 cross-modal downstream benchmarks (including video-text retrieval, image-text retrieval, VideoQA, and VQA) show that in most cases, UniAdapter not only outperforms state-of-the-art methods but also beats the full fine-tuning strategy. In particular, on the MSRVTT retrieval task, UniAdapter achieves 49.7% recall@1 with only 2.2% of the model parameters, outperforming the latest competitors by 2.0%. The code and models are available at https://github.com/RERV/UniAdapter.
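To make the adapter-distribution and partial weight-sharing idea concrete, the following is a minimal sketch, not the released implementation: it assumes a standard bottleneck-adapter design in which modality-specific adapters (visual, textual, cross-modal) share a single down-projection while keeping separate up-projections, which is one way the tunable-parameter count could be reduced as described in the abstract. The class and argument names are illustrative only.

```python
import torch
import torch.nn as nn


class SharedBottleneckAdapters(nn.Module):
    """Illustrative sketch (assumption, not the official UniAdapter code):
    bottleneck adapters for each modality branch with a shared
    down-projection to cut the number of tunable parameters."""

    def __init__(self, hidden_dim: int = 768, bottleneck_dim: int = 64):
        super().__init__()
        # Shared down-projection: one weight matrix serves all modalities.
        self.shared_down = nn.Linear(hidden_dim, bottleneck_dim)
        self.activation = nn.GELU()
        # Modality-specific up-projections keep per-branch capacity.
        self.up = nn.ModuleDict({
            name: nn.Linear(bottleneck_dim, hidden_dim)
            for name in ("visual", "textual", "cross")
        })

    def forward(self, x: torch.Tensor, modality: str) -> torch.Tensor:
        # Residual bottleneck adapter: x + Up_m(GELU(SharedDown(x)))
        return x + self.up[modality](self.activation(self.shared_down(x)))


if __name__ == "__main__":
    adapter = SharedBottleneckAdapters()
    tokens = torch.randn(2, 16, 768)  # (batch, sequence length, hidden dim)
    out = adapter(tokens, modality="cross")
    tunable = sum(p.numel() for p in adapter.parameters())
    print(out.shape, f"tunable params: {tunable}")
```

In this sketch only the adapter parameters would be trained while the pre-trained backbone stays frozen; sharing the down-projection across the three branches is what keeps the tunable-parameter budget small relative to per-branch adapters.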