Language model pre-training has proven useful in many language understanding tasks. In this paper, we investigate whether it is still helpful to add a self-training method alongside the pre-training and fine-tuning steps. Toward this goal, we propose a learning framework that makes the best use of unlabeled data under both low-resource and high-resource labeled-data settings. In industrial NLP applications, large amounts of data are produced by users or customers, and our learning framework builds on this large pool of unlabeled data. First, we use a model fine-tuned on the manually labeled dataset to predict pseudo labels for the user-generated unlabeled data. Then we use these pseudo labels to supervise task-specific training on the large user-generated corpus. We regard this task-specific training on pseudo labels as a pre-training step for the subsequent fine-tuning. Finally, we fine-tune the model on the manually labeled dataset. In this work, we first show empirically that our method solidly improves performance by 3.6% when the manually labeled fine-tuning dataset is relatively small. We then show that it still improves performance by a further 0.2% when the manually labeled fine-tuning dataset is relatively large. We argue that our method makes the best use of the unlabeled data and is superior to either pre-training or self-training alone.
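The four-step pipeline described above can be summarized in code. The following is a minimal sketch, assuming a BERT-style sequence classifier trained with Hugging Face Transformers; the model name, label count, dataset placeholders, and the choice to re-initialize the student from the original checkpoint before pseudo-label pre-training are illustrative assumptions, not the authors' exact implementation.

```python
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-uncased"   # assumption: any pre-trained LM checkpoint
NUM_LABELS = 2                     # assumption: a binary classification task

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

class TextDataset(Dataset):
    """Wraps raw texts (and optional labels) for the Trainer."""
    def __init__(self, texts, labels=None):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        if self.labels is not None:
            item["labels"] = torch.tensor(self.labels[i])
        return item

def train(model, dataset, output_dir):
    args = TrainingArguments(output_dir=output_dir, num_train_epochs=3)
    Trainer(model=model, args=args, train_dataset=dataset).train()
    return model

# Placeholders for the manually labeled set and the large user-generated corpus.
labeled_texts, labeled_labels = ["..."], [0]
unlabeled_texts = ["..."]

# Step 1: fine-tune the pre-trained LM on the manually labeled data.
teacher = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS)
teacher = train(teacher, TextDataset(labeled_texts, labeled_labels), "out/teacher")

# Step 2: predict pseudo labels for the user-generated unlabeled data.
preds = Trainer(model=teacher).predict(TextDataset(unlabeled_texts))
pseudo_labels = preds.predictions.argmax(axis=-1).tolist()

# Step 3: task-specific "pre-training" on the pseudo-labeled corpus
# (assumption: the student starts from the original checkpoint).
student = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS)
student = train(student, TextDataset(unlabeled_texts, pseudo_labels),
                "out/pseudo_pretrain")

# Step 4: final fine-tuning on the manually labeled data.
student = train(student, TextDataset(labeled_texts, labeled_labels), "out/final")
```

In this sketch the pseudo-labeled stage reuses the same task head as the final fine-tuning stage, which is one reasonable reading of treating it as a "pre-training step"; other variants (e.g., soft labels or confidence filtering) would slot into Step 2 without changing the overall structure.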