Transfer learning has fundamentally changed the landscape of natural language processing (NLP) research. Many existing state-of-the-art models are first pre-trained on a large text corpus and then fine-tuned on downstream tasks. However, due to the limited data of downstream tasks and the extremely large capacity of pre-trained models, aggressive fine-tuning often causes the adapted model to overfit the downstream data and forget the knowledge of the pre-trained model. To address this issue in a more principled manner, we propose a new computational framework for robust and efficient fine-tuning of pre-trained language models. Specifically, our proposed framework contains two important ingredients: (i) smoothness-inducing regularization, which effectively manages the capacity of the model; and (ii) Bregman proximal point optimization, an instance of trust-region methods that prevents knowledge forgetting. Our experiments demonstrate that our proposed method achieves state-of-the-art performance on multiple NLP benchmarks.
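To make the two ingredients concrete, below is a minimal PyTorch sketch, not the paper's implementation: it assumes a HuggingFace-style classifier that accepts an `inputs_embeds` keyword and returns logits, and the helper names (`symmetric_kl`, `smoothness_regularizer`, `bregman_proximal_term`) and hyperparameter values are illustrative. The smoothness-inducing regularizer penalizes how much the model's prediction changes under a small adversarial perturbation of the input embeddings; the Bregman proximal term keeps each update close to the previous iterate's predictions, acting as a trust region.

```python
# Sketch under assumed names and signatures; not the authors' released code.
import torch
import torch.nn.functional as F

def symmetric_kl(p_logits, q_logits):
    """Symmetrized KL divergence between two categorical predictions,
    given as logits (batch mean)."""
    p_log = F.log_softmax(p_logits, dim=-1)
    q_log = F.log_softmax(q_logits, dim=-1)
    kl_pq = F.kl_div(q_log, p_log, log_target=True, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(p_log, q_log, log_target=True, reduction="batchmean")  # KL(q || p)
    return kl_pq + kl_qp

def smoothness_regularizer(model, embeds, clean_logits, eps=1e-3, step=1e-3):
    """Smoothness-inducing regularization: find a small perturbation of the
    input embeddings that maximally changes the prediction (approximated by
    one ascent step), then penalize that change."""
    noise = torch.zeros_like(embeds).uniform_(-eps, eps).requires_grad_(True)
    adv_logits = model(inputs_embeds=embeds + noise)
    inner_loss = symmetric_kl(adv_logits, clean_logits.detach())
    grad, = torch.autograd.grad(inner_loss, noise)
    # One gradient-ascent step on the noise, projected back into the eps-ball.
    adv_noise = (noise + step * grad.sign()).clamp(-eps, eps).detach()
    adv_logits = model(inputs_embeds=embeds + adv_noise)
    return symmetric_kl(adv_logits, clean_logits)

def bregman_proximal_term(logits, prev_logits, mu=1.0):
    """Bregman proximal point / trust-region penalty: keep the current
    predictions close to those of the previous (frozen) iterate."""
    return mu * symmetric_kl(logits, prev_logits.detach())
```

In training, the total loss would be the task loss plus a weighted smoothness regularizer plus the proximal term, where `prev_logits` comes from a frozen copy (or moving average) of the model from the previous proximal step.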