Large-scale pre-trained models (PTMs) such as BERT and GPT have recently achieved great success and become a milestone in the field of artificial intelligence (AI). Owing to sophisticated pre-training objectives and huge model parameters, large-scale PTMs can effectively capture knowledge from massive labeled and unlabeled data. By storing knowledge in huge parameters and fine-tuning on specific tasks, PTMs allow the rich knowledge implicitly encoded in those parameters to benefit a variety of downstream tasks, which has been extensively demonstrated via experimental verification and empirical analysis. It is now the consensus of the AI community to adopt PTMs as the backbone for downstream tasks rather than learning models from scratch. In this paper, we take a deep look into the history of pre-training, particularly its relation to transfer learning and self-supervised learning, to reveal the crucial position of PTMs in the AI development spectrum. We then comprehensively review the latest breakthroughs of PTMs. Driven by the surge of computational power and the increasing availability of data, these breakthroughs span four important directions: designing effective architectures, utilizing rich contexts, improving computational efficiency, and conducting interpretation and theoretical analysis. Finally, we discuss a series of open problems and research directions for PTMs, and we hope our view can inspire and advance the future study of PTMs.