In computer vision, strong transfer learning performance has been achieved by adapting large-scale pretrained vision models (e.g., vision transformers) to downstream tasks. Common approaches to model adaptation either update all model parameters or rely on linear probes. In this paper, we study parameter-efficient model adaptation strategies for vision transformers on the image classification task. We formulate efficient model adaptation as a subspace training problem and perform a comprehensive benchmark of different efficient adaptation methods. We conduct an empirical study of each efficient model adaptation method, focusing on its performance relative to its parameter cost. Furthermore, we propose a parameter-efficient model adaptation framework that first selects submodules by measuring local intrinsic dimensions and then projects them into a subspace for further decomposition via a novel Kronecker Adaptation (KAdaptation) method. We analyze and compare our method against a diverse set of baseline adaptation methods, including state-of-the-art methods for pretrained language models. Our method achieves the best tradeoff between accuracy and parameter efficiency across 20 image classification datasets under the few-shot setting and 7 image classification datasets under the full-shot setting.
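The abstract describes KAdaptation only at a high level. Purely as an illustration of the underlying idea, below is a minimal PyTorch sketch assuming the update to a frozen pretrained linear layer is parameterized as a sum of Kronecker products, ΔW = Σᵢ Aᵢ ⊗ Bᵢ, with each Bᵢ further factored into a low-rank product uᵢvᵢᵀ. The class name `KroneckerAdapter`, the shapes, and the initialization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class KroneckerAdapter(nn.Module):
    """Hypothetical sketch: additive Kronecker-product update
    delta_W = sum_i A_i kron (u_i @ v_i^T) on a frozen linear layer.
    All names/shapes are assumptions based on the abstract, not the
    authors' API."""

    def __init__(self, frozen: nn.Linear, n_terms: int = 4, rank: int = 1):
        super().__init__()
        self.frozen = frozen
        for p in self.frozen.parameters():
            p.requires_grad_(False)  # only the small adapter factors are trained
        d_out, d_in = frozen.weight.shape
        # Assume both dims divide evenly, so A_i is (n_terms x n_terms)
        # and each B_i is (d_out/n_terms x d_in/n_terms).
        assert d_out % n_terms == 0 and d_in % n_terms == 0
        b_out, b_in = d_out // n_terms, d_in // n_terms
        self.A = nn.Parameter(torch.randn(n_terms, n_terms, n_terms) * 0.01)
        # Low-rank factors u_i, v_i so that B_i = u_i @ v_i^T.
        self.u = nn.Parameter(torch.randn(n_terms, b_out, rank) * 0.01)
        self.v = nn.Parameter(torch.zeros(n_terms, rank, b_in))

    def delta_weight(self) -> torch.Tensor:
        B = self.u @ self.v  # (n_terms, b_out, b_in), each slice rank <= r
        terms = [torch.kron(self.A[i], B[i]) for i in range(self.A.shape[0])]
        return torch.stack(terms).sum(dim=0)  # (d_out, d_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the trainable Kronecker update.
        return self.frozen(x) + x @ self.delta_weight().T

# Usage sketch on a transformer-sized projection:
layer = KroneckerAdapter(nn.Linear(768, 768), n_terms=4, rank=1)
out = layer(torch.randn(2, 768))
```

Initializing `v` at zero makes ΔW start at zero, so training begins exactly from the pretrained model, mirroring common practice in low-rank adaptation methods; only the small factors A, u, and v receive gradients, which is where the parameter savings come from.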