We introduce MetaICL (Meta-training for In-Context Learning), a new meta-training framework for few-shot learning where a pretrained language model is tuned to do in-context learning on a large set of training tasks. This meta-training enables the model to more effectively learn a new task in context at test time, by simply conditioning on a few training examples with no parameter updates or task-specific templates. We experiment on a large, diverse collection of tasks consisting of 142 NLP datasets including classification, question answering, natural language inference, paraphrase detection and more, across seven different meta-training/target splits. MetaICL outperforms a range of baselines including in-context learning without meta-training and multi-task learning followed by zero-shot transfer. We find that the gains are particularly significant for target tasks that have domain shifts from the meta-training tasks, and that using a diverse set of meta-training tasks is key to improvements. We also show that MetaICL approaches (and sometimes beats) the performance of models fully finetuned on the target task, and outperforms much bigger models with nearly 8x the parameters. Finally, we show that MetaICL is complementary to human-written instructions, and the best performance can be achieved by combining both approaches.
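To make the meta-training setup concrete, below is a minimal sketch of how a MetaICL-style training instance could be constructed: sample k demonstration examples plus one query from a single task, concatenate them into one sequence, and train the language model to predict only the final output. This is an illustrative assumption of the procedure, not the authors' released implementation; the toy tasks, field layout, model choice (gpt2), and hyperparameters are all hypothetical.

```python
# Hedged sketch of MetaICL-style meta-training: dataset contents, k, the
# model, and the training loop below are illustrative assumptions only.
import random
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical meta-training tasks: each is a list of (input, output) pairs.
meta_train_tasks = {
    "sentiment": [("the movie was great", "positive"),
                  ("boring and too long", "negative"),
                  ("a delightful surprise", "positive"),
                  ("i want my money back", "negative"),
                  ("an instant classic", "positive")],
    "topic": [("the team won in overtime", "sports"),
              ("shares fell after the earnings call", "business"),
              ("the senate passed the bill", "politics"),
              ("new planet discovered by telescope", "science"),
              ("the striker scored a hat-trick", "sports")],
}
k = 4  # number of in-context demonstrations per training instance

def build_instance(examples, k):
    """Sample k demonstrations plus one query from a single task,
    concatenate them, and mask everything except the final output
    so the loss is computed only on the answer tokens."""
    demos = random.sample(examples, k + 1)
    context = " ".join(f"{x} {y}" for x, y in demos[:k]) + f" {demos[k][0]}"
    target = " " + demos[k][1]
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100  # ignore context tokens in the loss
    return input_ids, labels

model.train()
for step in range(10):  # toy number of steps
    task = random.choice(list(meta_train_tasks.values()))
    input_ids, labels = build_instance(task, k)
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At test time the same input format is used for an unseen task: k labeled examples of the new task are concatenated with the test input, and the model predicts the output with no parameter updates.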