The pretrain-then-finetune paradigm has been widely adopted in computer vision. But as the size of Vision Transformers (ViTs) grows exponentially, full finetuning becomes prohibitive due to the heavy storage overhead. Motivated by parameter-efficient transfer learning (PETL) on language transformers, recent studies insert lightweight adaptation modules (e.g., adapter layers or prompt tokens) into a pretrained ViT and finetune only these modules while the pretrained weights are kept frozen. However, these modules were originally designed for finetuning language models and do not account for the prior knowledge specific to visual tasks. In this paper, we propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation modules, introducing only a small number of trainable parameters (less than 0.5% of the model parameters) to adapt the large ViT. Unlike other PETL methods, Convpass benefits from the hard-coded inductive bias of convolutional layers and is thus better suited to visual tasks, especially in the low-data regime. Experimental results on the VTAB-1K benchmark and few-shot learning datasets show that Convpass outperforms current language-oriented adaptation modules, demonstrating the necessity of tailoring vision-oriented adaptation modules for adapting vision models.
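The abstract does not spell out the bypass architecture, so the following is only a minimal PyTorch sketch of the general idea: a small, trainable convolutional bottleneck added in parallel to a frozen ViT block, so that the spatial inductive bias of convolution is injected while the pretrained weights stay untouched. The module names (`Convpass`, `AdaptedBlock`), the bottleneck width, the 14x14 token grid (ViT-B/16 on 224x224 inputs), and the exact placement of the bypass are illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn


class Convpass(nn.Module):
    """Hypothetical convolutional bypass: down-project tokens, reshape patch
    tokens into a 2-D grid, apply a small convolution, and project back up."""

    def __init__(self, dim: int, bottleneck_dim: int = 8, grid_size: int = 14):
        super().__init__()
        self.grid_size = grid_size
        self.down = nn.Linear(dim, bottleneck_dim)   # token-wise down-projection
        self.conv = nn.Conv2d(bottleneck_dim, bottleneck_dim, 3, padding=1)  # spatial mixing
        self.up = nn.Linear(bottleneck_dim, dim)     # token-wise up-projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1 + grid_size**2, dim) -- class token followed by patch tokens
        cls_tok, patches = x[:, :1], x[:, 1:]
        b, n, _ = patches.shape
        h = self.act(self.down(patches))
        h = h.transpose(1, 2).reshape(b, -1, self.grid_size, self.grid_size)
        h = self.act(self.conv(h))
        h = h.flatten(2).transpose(1, 2)
        patches_out = self.up(h)
        cls_out = self.up(self.act(self.down(cls_tok)))  # class token skips the conv
        return torch.cat([cls_out, patches_out], dim=1)


class AdaptedBlock(nn.Module):
    """Wrap a frozen pretrained ViT block and add the trainable bypass output."""

    def __init__(self, vit_block: nn.Module, dim: int):
        super().__init__()
        self.block = vit_block
        for p in self.block.parameters():
            p.requires_grad = False                  # pretrained weights stay frozen
        self.bypass = Convpass(dim)                  # only these parameters are trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x) + self.bypass(x)
```

Under these assumptions, each bypass adds roughly `2 * dim * bottleneck_dim` linear parameters plus a 3x3 convolution per block, which is on the order of a fraction of a percent of a ViT-B backbone and consistent with the sub-0.5% budget stated above.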