We propose pre-finetuning, an additional large-scale learning stage between language model pre-training and fine-tuning. Pre-finetuning is massively multi-task learning (around 50 datasets, over 4.8 million total labeled examples), and is designed to encourage learning of representations that generalize better to many different tasks. We show that pre-finetuning consistently improves performance for pretrained discriminators (e.g. RoBERTa) and generation models (e.g. BART) on a wide range of tasks (sentence prediction, commonsense reasoning, MRC, etc.), while also significantly improving sample efficiency during fine-tuning. We also show that large-scale multi-tasking is crucial; pre-finetuning can hurt performance when few tasks are used, up until a critical point (usually above 15) after which performance improves linearly in the number of tasks.
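To make the pre-finetuning idea concrete, the following is a minimal sketch, not the paper's implementation: a shared encoder is jointly trained on many classification tasks, each with its own output head, with several tasks contributing to every update. The toy encoder, the task names and label counts, the number of tasks sampled per step, and the log-of-label-count loss scaling are all illustrative assumptions rather than details taken from the paper.

```python
# Illustrative sketch of massively multi-task pre-finetuning (assumptions noted above).
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
VOCAB, HIDDEN = 1000, 64

class SharedEncoder(nn.Module):
    """Tiny stand-in for a pretrained encoder such as RoBERTa or BART's encoder."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)

    def forward(self, ids):
        # Mean-pool token representations into one vector per example.
        return self.layer(self.embed(ids)).mean(dim=1)

# Hypothetical task registry: one classification head per task, differing only in label count.
tasks = {f"task_{i}": random.choice([2, 3, 5]) for i in range(20)}
encoder = SharedEncoder()
heads = nn.ModuleDict({name: nn.Linear(HIDDEN, n) for name, n in tasks.items()})
opt = torch.optim.AdamW(list(encoder.parameters()) + list(heads.parameters()), lr=1e-4)

def sample_batch(n_labels, batch_size=8, seq_len=16):
    """Placeholder for drawing a labeled batch from one task's dataset (random data here)."""
    ids = torch.randint(0, VOCAB, (batch_size, seq_len))
    labels = torch.randint(0, n_labels, (batch_size,))
    return ids, labels

for step in range(100):
    # Heterogeneous update: accumulate losses from several tasks before stepping,
    # dividing each task's loss by log(num_labels) so tasks with more classes do not
    # dominate simply because their cross-entropy is larger (an assumed scaling rule).
    opt.zero_grad()
    for name in random.sample(list(tasks), k=4):
        ids, labels = sample_batch(tasks[name])
        logits = heads[name](encoder(ids))
        loss = nn.functional.cross_entropy(logits, labels)
        (loss / torch.log(torch.tensor(float(tasks[name])))).backward()
    opt.step()
```

After this multi-task stage, the shared encoder (not the task heads) would be carried forward and fine-tuned on each downstream task in the usual way.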