The advent of large-scale pre-trained language models has contributed greatly to recent progress in natural language processing. Many state-of-the-art language models are first trained on a large text corpus and then fine-tuned on downstream tasks. Despite their recent success and wide adoption, fine-tuned pre-trained language models often suffer from overfitting, which leads to poor generalizability due to the extremely high complexity of the model and the limited training samples available from downstream tasks. To address this problem, we propose a novel and effective fine-tuning framework, named Layerwise Noise Stability Regularization (LNSR). Specifically, we propose to inject standard Gaussian noise or in-manifold noise and to regularize the hidden representations of the fine-tuned model. We first provide theoretical analyses to support the efficacy of our method. We then demonstrate the advantages of the proposed method over other state-of-the-art algorithms, including L2-SP, Mixout, and SMART. While these previous works verify the effectiveness of their methods only on relatively simple text classification tasks, we also verify the effectiveness of our method on question answering tasks, where the target problem is much more difficult and more training examples are available. Furthermore, extensive experimental results indicate that the proposed algorithm not only enhances the in-domain performance of language models but also improves their generalization performance on out-of-domain data.
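The core idea of noise stability regularization can be illustrated with a minimal sketch: perturb the input with Gaussian noise and penalize how much a layer's hidden representation changes. The toy two-layer network, the noise scale `sigma`, and the function names below are illustrative assumptions for exposition; the paper's LNSR operates layerwise on a fine-tuned transformer, not on this toy model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network standing in for a pre-trained encoder
# (illustrative only; LNSR regularizes transformer layers).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 4))

def hidden(x):
    """Hidden representation after the first layer (tanh activation)."""
    return np.tanh(x @ W1)

def noise_stability_penalty(x, sigma=0.1, n_samples=8):
    """Estimate the expected squared change of the hidden representation
    under Gaussian input perturbations; added to the task loss as a
    regularizer during fine-tuning."""
    h = hidden(x)
    total = 0.0
    for _ in range(n_samples):
        noise = rng.normal(scale=sigma, size=x.shape)
        total += np.mean((hidden(x + noise) - h) ** 2)
    return total / n_samples

x = rng.normal(size=(4, 8))
penalty = noise_stability_penalty(x)  # small positive value
```

In practice the penalty would be computed per layer and weighted against the downstream task loss; a stable (flatter) representation yields a smaller penalty.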