With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. It has become a new trend to explore ICL for evaluating and extrapolating the abilities of LLMs. In this paper, we aim to survey and summarize the progress, challenges, and future directions of ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques for ICL, including training strategies, prompting strategies, and more. Finally, we present the challenges of ICL and suggest potential directions for further research. We hope our work encourages more research on uncovering how ICL works and on improving ICL.
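As a concrete illustration of the paradigm described above, the following minimal sketch shows how an ICL prompt is typically assembled: a few demonstration input-label pairs are concatenated with the test input, and a frozen LLM completes the prompt without any parameter updates. The task, template, and `generate` function are illustrative assumptions, not a specific library API.

```python
# Minimal sketch of in-context learning (ICL) prompt construction.
# The demonstrations live only in the prompt; no model parameters are updated.

demonstrations = [
    ("The movie was a delight from start to finish.", "positive"),
    ("I regret buying this phone; it broke in a week.", "negative"),
]

def build_icl_prompt(demos, query):
    """Concatenate demonstration input-label pairs with the test input."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")  # model completes the label
    return "\n\n".join(lines)

prompt = build_icl_prompt(demonstrations, "The plot was dull and predictable.")
# prediction = generate(prompt)  # placeholder for any LLM completion interface;
#                                # the expected completion here is "negative"
print(prompt)
```

The key design point is that task specification happens entirely through the choice and formatting of demonstrations, which is why demonstration selection and prompting strategies are central topics in the techniques surveyed in this paper.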