Language models are pre-trained on large corpora of generic data such as BookCorpus, Common Crawl, and Wikipedia, which is essential for the model to learn the linguistic characteristics of the language. Recent studies suggest using Domain-Adaptive Pre-training (DAPT) and Task-Adaptive Pre-training (TAPT) as an intermediate step before the final finetuning task. This step helps cover the target-domain vocabulary and improves model performance on the downstream task. In this work, we study the impact of training only the embedding layer during TAPT and task-specific finetuning on the model's performance. Based on our study, we propose a simple approach that makes the intermediate TAPT step for BERT-based models more efficient by selectively pre-training BERT layers. We show that training only the BERT embedding layer during TAPT is sufficient to adapt to the vocabulary of the target domain and achieve comparable performance. Our approach is computationally efficient, with 78\% fewer parameters trained during TAPT. The proposed embedding-layer finetuning approach can also serve as an efficient domain-adaptation technique.
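The core idea of training only the embedding layer during TAPT can be illustrated with a minimal sketch, assuming the HuggingFace transformers implementation of BERT; the checkpoint name `bert-base-uncased`, the parameter-name prefix `bert.embeddings`, and the masked-language-modeling setup are illustrative assumptions rather than the authors' released code.

```python
# Minimal sketch of selective pre-training: keep only the BERT embedding
# layer trainable and freeze the rest of the model before running TAPT
# with the usual masked-language-modeling objective.
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")  # assumed checkpoint

for name, param in model.named_parameters():
    # In the HuggingFace parameter naming, "bert.embeddings" covers the word,
    # position, and token-type embeddings plus their LayerNorm; all other
    # parameters (encoder layers, MLM head transform) are frozen.
    param.requires_grad = name.startswith("bert.embeddings")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} ({trainable / total:.1%})")
```

Under this setup the frozen encoder still propagates gradients back to the embedding matrix, so the masked-LM loss on target-domain text can be reduced by updating the embeddings alone, after which task-specific finetuning proceeds as usual.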