Transfer learning has revolutionized computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Fine-tuned Language Models (FitLaM), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a state-of-the-art language model. Our method significantly outperforms the state-of-the-art on five text classification tasks, reducing the error by 18-24% on the majority of datasets. We open-source our pretrained models and code to enable adoption by the community.
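To make the pretrain-then-fine-tune pattern the abstract describes concrete, here is a minimal PyTorch sketch of the general idea: an encoder pretrained with a language-modeling objective is reused, with a new classification head, for a downstream text classification task. This is an illustrative assumption, not FitLaM's released code; the architecture, layer sizes, and the `pretrained_lm.pt` checkpoint name are placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical sizes; this sketch only illustrates the transfer pattern,
# not the paper's actual state-of-the-art language model.
VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM, NUM_CLASSES = 10000, 300, 512, 5

class LMEncoder(nn.Module):
    """Shared encoder (embeddings + LSTM), pretrained on a language-modeling objective."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)

    def forward(self, tokens):            # tokens: (batch, seq_len)
        hidden, _ = self.lstm(self.embed(tokens))
        return hidden                     # (batch, seq_len, HIDDEN_DIM)

class Classifier(nn.Module):
    """Classification head attached to the pretrained encoder for fine-tuning."""
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder            # weights transferred from LM pretraining
        self.head = nn.Linear(HIDDEN_DIM, NUM_CLASSES)

    def forward(self, tokens):
        hidden = self.encoder(tokens)
        return self.head(hidden[:, -1])   # classify from the final hidden state

encoder = LMEncoder()
# encoder.load_state_dict(torch.load("pretrained_lm.pt"))  # hypothetical checkpoint
model = Classifier(encoder)
logits = model(torch.randint(0, VOCAB_SIZE, (4, 20)))      # dummy batch of token ids
```

During fine-tuning, all parameters (encoder and head) are typically updated on the target task; only the head starts from random initialization, which is what lets the pretrained representations transfer.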