Fine-tuning large language models for different tasks can be costly and inefficient, and even methods that reduce the number of tuned parameters still require full gradient-based optimization. We propose HyperTuning, a novel approach to model adaptation that uses a hypermodel to generate task-specific parameters for a fixed downstream model. We demonstrate a simple setup for hypertuning with HyperT5, a T5-based hypermodel that produces soft prefixes or LoRA parameters for a frozen T5 model from few-shot examples. We train HyperT5 in two stages: first, hyperpretraining with a modified conditional language modeling objective that trains a hypermodel to generate parameters; second, multi-task fine-tuning (MTF) on a large number of diverse language tasks. We evaluate HyperT5 on the P3, MetaICL, and Super-NaturalInstructions datasets, and show that it can effectively generate parameters for unseen tasks. Moreover, we show that using hypermodel-generated parameters as initializations for further parameter-efficient fine-tuning improves performance. HyperTuning can thus be a flexible and efficient way to leverage large language models for diverse downstream applications.
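To make the setup concrete, the following is a minimal, self-contained sketch of the hypertuning idea, not the paper's implementation: a small stand-in hypermodel (a toy GRU encoder in place of the T5-based HyperT5) encodes few-shot examples and emits LoRA-style low-rank factors that adapt a frozen downstream layer (a single linear layer in place of the downstream T5). All class names, dimensions, and the LoRA rank are illustrative assumptions.

```python
# Illustrative sketch of hypertuning with generated LoRA parameters.
# A hypermodel maps few-shot examples to low-rank deltas (A, B) that are
# applied to a frozen downstream layer; the downstream weights never change.
import torch
import torch.nn as nn

class HyperLoRAGenerator(nn.Module):
    """Encodes few-shot examples and emits LoRA factors (A, B). Hypothetical stand-in
    for the T5-based hypermodel described in the paper."""

    def __init__(self, enc_dim=256, hidden=512, target_dim=512, rank=4):
        super().__init__()
        self.rank, self.target_dim = rank, target_dim
        self.encoder = nn.GRU(enc_dim, hidden, batch_first=True)  # toy encoder
        self.to_A = nn.Linear(hidden, rank * target_dim)
        self.to_B = nn.Linear(hidden, target_dim * rank)

    def forward(self, fewshot_embeds):             # (batch, seq, enc_dim)
        _, h = self.encoder(fewshot_embeds)        # final hidden state: (1, batch, hidden)
        h = h[-1]                                  # (batch, hidden)
        A = self.to_A(h).view(-1, self.rank, self.target_dim)   # (batch, r, d)
        B = self.to_B(h).view(-1, self.target_dim, self.rank)   # (batch, d, r)
        return A, B

class FrozenLinearWithLoRA(nn.Module):
    """A frozen linear layer whose output is adapted by externally generated LoRA factors."""

    def __init__(self, dim=512):
        super().__init__()
        self.base = nn.Linear(dim, dim)
        for p in self.base.parameters():           # downstream parameters stay frozen
            p.requires_grad_(False)

    def forward(self, x, A, B):                    # x: (batch, seq, dim)
        delta = torch.einsum("bsd,bdr,brk->bsk", x, B, A)  # low-rank update x @ B @ A
        return self.base(x) + delta

# Usage: adapting to a new task is a single forward pass through the hypermodel.
gen = HyperLoRAGenerator()
layer = FrozenLinearWithLoRA()
fewshot = torch.randn(2, 10, 256)                  # pretend-embedded few-shot examples
A, B = gen(fewshot)
out = layer(torch.randn(2, 16, 512), A, B)
print(out.shape)                                   # torch.Size([2, 16, 512])
```

The design point the example illustrates is that adaptation to an unseen task requires only a forward pass through the hypermodel to produce parameters, not gradient-based optimization of the downstream model; those generated parameters can also serve as initializations for subsequent parameter-efficient fine-tuning, as described above.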