Standard fine-tuning of large pre-trained language models (PLMs) for downstream tasks requires updating hundreds of millions to billions of parameters and storing a large copy of the PLM weights for every task, resulting in increased costs for storing, sharing, and serving the models. To address this, parameter-efficient fine-tuning (PEFT) techniques were introduced, in which small trainable components are injected into the PLM and updated during fine-tuning. We propose AdaMix, a general PEFT method that tunes a mixture of adaptation modules -- given the underlying PEFT method of choice -- introduced in each Transformer layer, while keeping most of the PLM weights frozen. For instance, AdaMix can leverage a mixture of adapters like Houlsby or a mixture of low-rank decomposition matrices like LoRA to improve downstream task performance over the corresponding PEFT methods on fully supervised and few-shot NLU and NLG tasks. Further, we design AdaMix to match the computational cost and the number of tunable parameters of the underlying PEFT method. By tuning only 0.1-0.2% of PLM parameters, we show that AdaMix outperforms SOTA parameter-efficient fine-tuning and full model fine-tuning for both NLU and NLG tasks.
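To make the idea concrete, below is a minimal PyTorch sketch of a mixture of Houlsby-style bottleneck adapters for a single Transformer layer. The stochastic-routing-during-training and weight-averaging-at-inference scheme shown here is an assumption chosen to be consistent with the claim that AdaMix matches the computational cost and tunable-parameter count of a single adaptation module; class names, the bottleneck size, and the number of modules are illustrative, not the paper's implementation.

```python
import copy
import random
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Houlsby-style adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, d_model: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))


class AdapterMixture(nn.Module):
    """Mixture of adaptation modules for one Transformer layer (hypothetical sketch).

    Training: one module is picked at random per forward pass, so per-step
    compute matches a single adapter. Inference: module weights are averaged
    into one adapter, so serving cost also matches the underlying PEFT method.
    """

    def __init__(self, d_model: int, bottleneck: int = 16, num_modules: int = 4):
        super().__init__()
        self.experts = nn.ModuleList(
            BottleneckAdapter(d_model, bottleneck) for _ in range(num_modules)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Stochastic routing: route the whole batch through one module.
            return random.choice(self.experts)(x)
        # In practice the merged adapter would be built once, not per call.
        return self.merged()(x)

    def merged(self) -> BottleneckAdapter:
        """Average the weights of all modules into a single adapter."""
        merged = copy.deepcopy(self.experts[0])
        with torch.no_grad():
            for name, param in merged.named_parameters():
                stacked = torch.stack(
                    [dict(e.named_parameters())[name] for e in self.experts]
                )
                param.copy_(stacked.mean(dim=0))
        return merged
```

In this sketch only the adapter parameters are trainable; the surrounding Transformer weights would stay frozen, which is what keeps the tunable fraction in the 0.1-0.2% range the abstract cites.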