While task-specific finetuning of pretrained networks has led to significant empirical advances in NLP, the large size of these networks makes finetuning difficult to deploy in multi-task, memory-constrained settings. We propose diff pruning as a simple approach to enable parameter-efficient transfer learning within the pretrain-finetune framework. This approach views finetuning as learning a task-specific diff vector that is applied on top of the pretrained parameter vector, which remains fixed and is shared across different tasks. The diff vector is adaptively pruned during training with a differentiable approximation to the L0-norm penalty to encourage sparsity. Diff pruning becomes parameter-efficient as the number of tasks increases, as it requires storing only the nonzero positions and weights of the diff vector for each task, while the cost of storing the shared pretrained model remains constant. It further does not require access to all tasks during training, which makes it attractive in settings where tasks arrive in a stream or the set of tasks is unknown. We find that models finetuned with diff pruning can match the performance of fully finetuned baselines on the GLUE benchmark while only modifying 0.5% of the pretrained model's parameters per task.
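As a rough illustration of the parameterization described above, the following sketch (PyTorch) decomposes the task-specific diff vector into a dense magnitude vector gated by a stochastic hard-concrete mask, whose expected L0 norm serves as the differentiable sparsity penalty. The class name DiffPrunedParameter and the hard-concrete hyperparameters (beta, gamma, zeta) are illustrative assumptions, not the paper's exact implementation.

# Minimal sketch of diff pruning for a single parameter tensor: the frozen
# pretrained weights are perturbed by a sparse, task-specific diff vector
# delta = z * w, where z is a hard-concrete gate giving a differentiable
# relaxation of the L0 penalty. Names and hyperparameters are illustrative.
import math
import torch
import torch.nn as nn

class DiffPrunedParameter(nn.Module):
    def __init__(self, pretrained: torch.Tensor,
                 beta: float = 2.0 / 3.0, gamma: float = -0.1, zeta: float = 1.1):
        super().__init__()
        # Shared pretrained weights stay fixed (stored once across all tasks).
        self.register_buffer("pretrained", pretrained.detach().clone())
        # Dense magnitude of the diff vector, learned per task.
        self.w = nn.Parameter(torch.zeros_like(pretrained))
        # Log-alpha parameters of the hard-concrete gates, learned per task.
        self.log_alpha = nn.Parameter(torch.zeros_like(pretrained))
        self.beta, self.gamma, self.zeta = beta, gamma, zeta

    def sample_gate(self) -> torch.Tensor:
        # Reparameterized sample from the hard-concrete distribution.
        u = torch.rand_like(self.log_alpha).clamp(1e-6, 1.0 - 1e-6)
        s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        return (s * (self.zeta - self.gamma) + self.gamma).clamp(0.0, 1.0)

    def forward(self) -> torch.Tensor:
        # Task parameters = frozen pretrained parameters + sparse diff vector.
        return self.pretrained + self.sample_gate() * self.w

    def expected_l0(self) -> torch.Tensor:
        # Closed-form expected number of nonzero gates; scaled by a sparsity
        # coefficient and added to the task loss to encourage pruning.
        return torch.sigmoid(
            self.log_alpha - self.beta * math.log(-self.gamma / self.zeta)
        ).sum()

After training, one would deterministically threshold the gates and store only the nonzero positions and values of the resulting diff vector for each task, which is where the per-task parameter savings come from.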