How can model designers turn task instructions into effective prompts for language models? Backed by extensive empirical analysis on GPT3, we observe important features of successful instructional prompts and propose several reframing techniques that model designers can use to create such prompts. For example, a complex task can be decomposed into multiple simpler tasks. We experiment with 12 NLP tasks across 6 diverse categories (question generation, classification, etc.). Our results show that reframing improves few-shot learning performance by 14\% while reducing sample complexity relative to existing few-shot baselines. The performance gains are particularly important for large language models such as GPT3, where tuning models or prompts on large datasets is not feasible. Furthermore, we observe that such gains are not limited to GPT3; the reframed tasks remain superior to raw instructions across different model architectures, underscoring the cross-model generality of these guidelines. We hope these empirically driven techniques will pave the way for more effective ways to prompt LMs in the future.
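As a minimal illustration of the decomposition idea mentioned above, the following sketch contrasts a single complex prompt with a sequence of simpler sub-prompts. The `query_lm` function, the example passage, and the specific prompt wording are hypothetical placeholders, not the paper's actual prompts; the sketch only shows the general reframing pattern.

```python
# Minimal sketch of decomposing a complex instruction into simpler sub-tasks.
# `query_lm` is a hypothetical stand-in for whatever LM call the designer uses.

def query_lm(prompt: str) -> str:
    """Placeholder for a real language-model call (e.g., an API request)."""
    return f"<model output for: {prompt[:40]}...>"

passage = "Beagles are small hounds originally bred for hunting hares."

# Raw instruction: one complex prompt asking for everything at once.
raw_prompt = (
    "Read the passage and write a question whose answer is an entity "
    f"mentioned in the passage, then give that answer.\n\nPassage: {passage}"
)
raw_output = query_lm(raw_prompt)

# Reframed: decompose the task into two simpler steps, feeding the
# output of the first step into the second prompt.
step1 = query_lm(f"List the entities mentioned in this passage.\n\nPassage: {passage}")
step2 = query_lm(
    f"Write a question about the passage whose answer is one of these entities: {step1}\n\n"
    f"Passage: {passage}"
)
```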