Adapting large-scale pretrained language models to downstream tasks via fine-tuning is the standard method for achieving state-of-the-art performance on NLP benchmarks. However, fine-tuning all weights of models with millions or billions of parameters is sample-inefficient, unstable in low-resource settings, and wasteful, as it requires storing a separate copy of the model for each task. Recent work has developed parameter-efficient fine-tuning methods, but these approaches either still require a relatively large number of parameters or underperform standard fine-tuning. In this work, we propose Compacter, a method for fine-tuning large-scale language models with a better trade-off between task performance and the number of trainable parameters than prior work. Compacter accomplishes this by building on top of ideas from adapters, low-rank optimization, and parameterized hypercomplex multiplication layers. Specifically, Compacter inserts task-specific weight matrices into a pretrained model's weights, which are computed efficiently as a sum of Kronecker products between shared ``slow'' weights and ``fast'' rank-one matrices defined per Compacter layer. By only training 0.047% of a pretrained model's parameters, Compacter performs on par with standard fine-tuning on GLUE and outperforms fine-tuning in low-resource settings. Our code is publicly available at https://github.com/rabeehk/compacter/
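The parameterization described above can be illustrated with a short NumPy sketch: a full adapter weight matrix is composed as a sum of Kronecker products between small shared ``slow'' matrices and per-layer rank-one ``fast'' factors. The dimensions below (and the helper name) are hypothetical, chosen only to show the parameter savings; this is a minimal sketch of the construction, not the authors' implementation.

```python
import numpy as np

def compose_weight(A, s, t):
    """Compose W (k x d) as sum_i kron(A_i, s_i t_i^T).

    A: (n, n, n)      -- n shared "slow" n x n matrices
    s: (n, k // n)    -- "fast" rank-one left factors, per layer
    t: (n, d // n)    -- "fast" rank-one right factors, per layer
    """
    return sum(np.kron(A[i], np.outer(s[i], t[i])) for i in range(A.shape[0]))

rng = np.random.default_rng(0)
n, k, d = 4, 768, 48                  # illustrative sizes; n divides k and d
A = rng.standard_normal((n, n, n))    # shared across layers
s = rng.standard_normal((n, k // n))  # defined per Compacter layer
t = rng.standard_normal((n, d // n))  # defined per Compacter layer

W = compose_weight(A, s, t)           # full (k, d) adapter weight
# Trainable parameters: n**3 + k + d = 880, versus k*d = 36864 for a dense W.
```

Because each Kronecker factor is tiny (an n-by-n matrix plus two vectors), the trainable parameter count grows roughly linearly in k + d rather than as their product, which is the source of Compacter's efficiency.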