With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), in which LLMs make predictions based only on contexts augmented with a few examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress and challenges of ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques, including training strategies and demonstration design strategies, as well as related analyses. Finally, we discuss the challenges of ICL and suggest potential directions for further research. We hope that our work can encourage more research on uncovering how ICL works and on improving ICL.