Transfer learning techniques are particularly useful for NLP tasks where a sizable amount of high-quality annotated data is difficult to obtain. Current approaches directly adapt a pre-trained language model (LM) on in-domain text before fine-tuning on downstream tasks. We show that extending the vocabulary of the LM with domain-specific terms leads to further gains. To a larger effect, we utilize structure in the unlabeled data to create auxiliary synthetic tasks, which helps the LM transfer to downstream tasks. We apply these approaches incrementally on a pre-trained RoBERTa-large LM and show considerable performance gains on three tasks in the IT domain: Extractive Reading Comprehension, Document Ranking, and Duplicate Question Detection.