Current literature demonstrates that Large Language Models (LLMs) are strong few-shot learners, and that prompting significantly improves their performance on a range of downstream tasks in few-shot settings. Subsequent work has attempted to automate human-designed prompting, with some success; in particular, automated prompting has been shown to outperform fine-tuning in certain K-shot learning scenarios. In this paper, we revisit automated prompting techniques on six different downstream tasks and across a wider range of K-shot learning settings. We find that automated prompting does not consistently outperform simple manual prompts. Our work suggests that, in addition to fine-tuning, manual prompts should be used as a baseline in this line of research.