Large-scale vision-language models (VLMs) pre-trained on billion-scale data have learned general visual representations and broad visual concepts. In principle, this well-learned knowledge structure should be inherited appropriately when the VLMs are transferred to downstream tasks with limited data. However, most existing efficient transfer learning (ETL) approaches for VLMs either damage the prior knowledge or are excessively biased towards it: e.g., prompt tuning (PT) discards the pre-trained text-based classifier and builds a new one, while adapter-style tuning (AT) fully relies on the pre-trained features. To address this, we propose a new efficient tuning approach for VLMs named Task Residual Tuning (TaskRes), which operates directly on the text-based classifier and explicitly decouples the prior knowledge of the pre-trained models from the new knowledge regarding a target task. Specifically, TaskRes keeps the original classifier weights from the VLMs frozen and obtains a new classifier for the target task by tuning a set of prior-independent parameters as a residual added to the original weights, which enables reliable prior knowledge preservation and flexible task-specific knowledge exploration. The proposed TaskRes is simple yet effective: it significantly outperforms previous ETL methods (e.g., PT and AT) on 11 benchmark datasets while requiring minimal implementation effort. Our code will be available at https://github.com/geekyutao/TaskRes.
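To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described above: the pre-trained text-based classifier is frozen, and only a same-shaped, zero-initialized residual is learned for the target task. The class name, the scaling factor `alpha`, and the cosine-similarity scoring are illustrative assumptions for this sketch, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class TaskResClassifier(nn.Module):
    """Sketch of Task Residual Tuning (TaskRes).

    The pre-trained text-based classifier weights (e.g., CLIP text
    embeddings of class prompts) are kept frozen; only a small residual
    tensor of the same shape is tuned for the target task.
    """

    def __init__(self, base_weights: torch.Tensor, alpha: float = 0.5):
        super().__init__()
        # Frozen prior knowledge: [num_classes, feature_dim] classifier.
        self.register_buffer("base", base_weights)
        # Prior-independent, task-specific residual, initialized to zero
        # so training starts exactly from the pre-trained classifier.
        self.residual = nn.Parameter(torch.zeros_like(base_weights))
        self.alpha = alpha  # residual scaling factor (assumed here)

    def forward(self, image_features: torch.Tensor) -> torch.Tensor:
        # New classifier = frozen base + scaled learnable residual.
        w = self.base + self.alpha * self.residual
        w = w / w.norm(dim=-1, keepdim=True)  # normalize for cosine scores
        x = image_features / image_features.norm(dim=-1, keepdim=True)
        return x @ w.t()  # class logits
```

Because only `residual` requires gradients, the tuned parameter count matches the classifier size rather than the full model, and setting the residual to zero recovers the original zero-shot classifier exactly.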