Neural Transfer Learning (TL) is becoming ubiquitous in Natural Language Processing (NLP), thanks to its high performance on many tasks, especially in low-resource scenarios. Notably, TL is widely used for neural domain adaptation to transfer valuable knowledge from high-resource to low-resource domains. In the standard fine-tuning scheme of TL, a model is pre-trained on a source domain and subsequently fine-tuned on a target domain, so that source and target domains are trained with the same architecture. In this paper, we show through interpretation methods that such a scheme, despite its efficiency, suffers from a main limitation: although pre-trained neurons are capable of adapting to new domains, they struggle to learn certain patterns that are specific to the target domain. Moreover, we shed light on the hidden negative transfer that occurs despite the high relatedness between source and target domains, and that may reduce the final gain brought by transfer learning. To address these problems, we propose to augment the pre-trained model with normalised, weighted and randomly initialised units that foster better adaptation while preserving the valuable source knowledge. We show that our approach yields significant improvements over the standard fine-tuning scheme for neural domain adaptation from the news domain to the social media domain on four NLP tasks: part-of-speech tagging, chunking, named entity recognition and morphosyntactic tagging.
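To make the proposed augmentation concrete, the sketch below illustrates the general idea of combining a pre-trained branch with randomly initialised units, each branch being normalised and scaled by a learnable weight before concatenation. This is a minimal sketch only, assuming PyTorch; the layer sizes and names such as AugmentedLayer, pre, and rand are illustrative and not taken from the authors' implementation.

```python
# Minimal sketch (assumptions: PyTorch; names and dimensions are illustrative,
# not the paper's actual implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AugmentedLayer(nn.Module):
    """Combine a pre-trained (source-domain) branch with a randomly
    initialised branch; each branch is l2-normalised and scaled by a
    learnable weight before the two outputs are concatenated."""

    def __init__(self, pretrained_branch: nn.Module, in_dim: int, rand_dim: int):
        super().__init__()
        self.pre = pretrained_branch              # weights copied from the source model
        self.rand = nn.Linear(in_dim, rand_dim)   # fresh, randomly initialised units
        self.w_pre = nn.Parameter(torch.tensor(1.0))   # learnable mixing weights
        self.w_rand = nn.Parameter(torch.tensor(1.0))

    def forward(self, x):
        h_pre = F.normalize(self.pre(x), p=2, dim=-1)    # normalise each branch so that
        h_rand = F.normalize(self.rand(x), p=2, dim=-1)  # neither dominates early in training
        return torch.cat([self.w_pre * h_pre, self.w_rand * h_rand], dim=-1)


# Usage sketch: wrap a feature layer of a model pre-trained on the news
# (source) domain, then fine-tune the whole augmented model on the social
# media (target) domain.
pretrained_top = nn.Linear(256, 128)   # stands in for a layer of the source model
layer = AugmentedLayer(pretrained_top, in_dim=256, rand_dim=64)
out = layer(torch.randn(8, 256))       # -> shape (8, 192)
```

The normalisation and learnable weights are what allow the randomly initialised units to capture target-specific patterns without being overwhelmed by, or disrupting, the knowledge stored in the pre-trained units.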