Language is essentially an intricate system of human expression governed by grammatical rules, and developing capable AI algorithms for comprehending and mastering a language poses a significant challenge. As a major approach, language modeling has been widely studied for language understanding and generation over the past two decades, evolving from statistical language models to neural language models. More recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models on large-scale corpora, showing strong capabilities in solving various NLP tasks. Since researchers have found that model scaling leads to performance improvements, they have further studied the scaling effect by increasing the model size to even larger scales. Interestingly, once the parameter scale exceeds a certain level, these enlarged language models not only achieve significant performance gains but also exhibit special abilities that are absent in small-scale language models. To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size. Research on LLMs has recently been advanced rapidly by both academia and industry, and a remarkable milestone is the launch of ChatGPT, which has attracted widespread public attention. The technical evolution of LLMs has had an important impact on the entire AI community and stands to revolutionize the way we develop and use AI algorithms. In this survey, we review recent advances in LLMs by introducing the background, key findings, and mainstream techniques. In particular, we focus on four major aspects of LLMs: pre-training, adaptation tuning, utilization, and capacity evaluation. In addition, we summarize the available resources for developing LLMs and discuss remaining issues and future directions.
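As a brief point of reference for the language modeling approach mentioned above, the standard autoregressive formulation (generic notation, not notation introduced by this survey) factorizes the probability of a token sequence into next-token predictions and trains the model parameters by maximizing the log-likelihood:
\[
  P_\theta(x_1, \ldots, x_T) \;=\; \prod_{t=1}^{T} P_\theta\!\left(x_t \mid x_{<t}\right),
  \qquad
  \mathcal{L}(\theta) \;=\; -\sum_{t=1}^{T} \log P_\theta\!\left(x_t \mid x_{<t}\right).
\]
Statistical language models, neural language models, PLMs, and LLMs can all be viewed as successively more powerful estimators of these conditional distributions.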