The pretrain-then-finetune paradigm has been widely adopted in computer vision. But as the size of Vision Transformers (ViTs) grows exponentially, full finetuning becomes prohibitive due to its heavy storage overhead. Motivated by parameter-efficient transfer learning (PETL) on language transformers, recent studies insert lightweight adaptation modules (e.g., adapter layers or prompt tokens) into a pretrained ViT and finetune only these modules while the pretrained weights are kept frozen. However, these modules were originally designed to finetune language models; although they port well to ViT, their design carries no prior knowledge about visual tasks. In this paper, we propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation modules, introducing only a small number of trainable parameters (less than 0.5% of the model parameters) to adapt the large ViT. Unlike other PETL methods, Convpass benefits from the hard-coded inductive bias of convolutional layers and is thus better suited to visual tasks, especially in the low-data regime. Experimental results on the VTAB-1k benchmark and few-shot learning datasets show that Convpass outperforms current language-oriented adaptation modules, demonstrating the necessity of tailoring vision-oriented adaptation modules for vision models.
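To make the idea concrete, the following is a minimal sketch of what such a convolutional bypass could look like, assuming a PyTorch-style implementation. The class name `ConvBypass`, the hidden width, the 14x14 patch grid, and the treatment of the [CLS] token are all illustrative assumptions; the abstract only states that small convolutional bypasses are added to a frozen pretrained ViT and that the trainable parameters stay below 0.5% of the model.

```python
# Hedged sketch of a convolutional bypass ("Convpass"-style) adaptation module.
# Layer sizes, placement, and [CLS] handling are assumptions for illustration only.
import torch
import torch.nn as nn


class ConvBypass(nn.Module):
    """Lightweight convolutional bypass run in parallel to a frozen ViT sublayer."""

    def __init__(self, dim: int, hidden: int = 8, grid: int = 14):
        super().__init__()
        self.grid = grid                                      # patch tokens form a grid x grid map
        self.down = nn.Conv2d(dim, hidden, 1)                 # 1x1 conv: project channels down
        self.conv = nn.Conv2d(hidden, hidden, 3, padding=1)   # 3x3 conv: local spatial inductive bias
        self.up = nn.Conv2d(hidden, dim, 1)                   # 1x1 conv: project channels back up
        self.act = nn.GELU()

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, 1 + grid*grid, dim) with a leading [CLS] token
        cls_tok, patch = tokens[:, :1], tokens[:, 1:]
        b, n, d = patch.shape
        x = patch.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        x = self.up(self.act(self.conv(self.act(self.down(x)))))
        patch_out = x.flatten(2).transpose(1, 2)
        # the [CLS] token receives no bypass update in this sketch
        return torch.cat([torch.zeros_like(cls_tok), patch_out], dim=1)


# Usage sketch: the pretrained ViT stays frozen; only the bypasses are trained,
# and their output is added to the output of the frozen path.
if __name__ == "__main__":
    dim, tokens = 768, torch.randn(2, 1 + 14 * 14, 768)
    bypass = ConvBypass(dim)
    out = tokens + bypass(tokens)
    n_trainable = sum(p.numel() for p in bypass.parameters())
    print(out.shape, n_trainable)   # small trainable parameter count per block
```

Because the bypass only needs a narrow hidden width, the trainable parameters per block number in the low thousands, which is how the overall budget can stay under 0.5% of a ViT-Base backbone.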