Task-adaptive pre-training (TAPT) and self-training (ST) have emerged as the major semi-supervised approaches to improving natural language understanding (NLU) tasks with massive amounts of unlabeled data. However, it is unclear whether they learn similar representations or whether they can be effectively combined. In this paper, we show that TAPT and ST are complementary under a simple protocol that proceeds as TAPT -> Finetuning -> Self-training (TFS). Experimental results show that the TFS protocol effectively exploits unlabeled data, achieving strong combined gains consistently across six datasets covering sentiment classification, paraphrase identification, natural language inference, named entity recognition, and dialogue slot classification. We investigate various semi-supervised settings and consistently show that the gains from TAPT and ST are strongly additive when the TFS procedure is followed. We hope that TFS can serve as an important semi-supervised baseline for future NLP studies.
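To make the three stages of the protocol concrete, the following is a minimal sketch of TFS for a binary sentiment task using Hugging Face Transformers. The backbone (roberta-base), the toy in-memory data, the single self-training round, and the 0.9 confidence threshold are illustrative assumptions for this sketch, not the paper's exact settings.

```python
# Minimal TFS sketch: TAPT -> Finetuning -> Self-training.
# Backbone, data, epochs, and the confidence threshold are illustrative assumptions.
import torch
from datasets import Dataset
from transformers import (
    AutoTokenizer, AutoModelForMaskedLM, AutoModelForSequenceClassification,
    DataCollatorForLanguageModeling, DataCollatorWithPadding,
    Trainer, TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

# Toy in-memory data; in practice these come from the target task.
labeled = Dataset.from_dict({"text": ["great movie", "terrible plot"], "label": [1, 0]})
unlabeled = Dataset.from_dict({"text": ["an okay film", "what a masterpiece"]})

# Step 1: TAPT. Continue masked-LM pre-training on task-domain unlabeled text.
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
mlm_data = unlabeled.map(tokenize, batched=True, remove_columns=["text"])
Trainer(
    model=mlm_model,
    args=TrainingArguments(output_dir="tapt", num_train_epochs=1, report_to="none"),
    train_dataset=mlm_data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
).train()
mlm_model.save_pretrained("tapt")  # TAPT checkpoint

# Step 2: Finetuning. Train a classifier from the TAPT checkpoint on labeled data.
clf = AutoModelForSequenceClassification.from_pretrained("tapt", num_labels=2)
labeled_tok = labeled.map(tokenize, batched=True)
Trainer(
    model=clf,
    args=TrainingArguments(output_dir="ft", num_train_epochs=3, report_to="none"),
    train_dataset=labeled_tok,
    data_collator=DataCollatorWithPadding(tokenizer),
).train()

# Step 3: Self-training. Pseudo-label unlabeled data with the finetuned teacher,
# keep confident predictions, and train a student on labeled + pseudo-labeled data.
clf.eval()
pseudo_texts, pseudo_labels = [], []
with torch.no_grad():
    for ex in unlabeled:
        enc = tokenizer(ex["text"], return_tensors="pt", truncation=True, max_length=128)
        enc = {k: v.to(clf.device) for k, v in enc.items()}
        probs = clf(**enc).logits.softmax(dim=-1).squeeze(0)
        conf, pred = probs.max(dim=-1)
        if conf.item() > 0.9:  # assumed confidence threshold
            pseudo_texts.append(ex["text"])
            pseudo_labels.append(pred.item())

augmented = Dataset.from_dict(
    {"text": labeled["text"] + pseudo_texts, "label": labeled["label"] + pseudo_labels}
).map(tokenize, batched=True)

student = AutoModelForSequenceClassification.from_pretrained("tapt", num_labels=2)
Trainer(
    model=student,
    args=TrainingArguments(output_dir="tfs", num_train_epochs=3, report_to="none"),
    train_dataset=augmented,
    data_collator=DataCollatorWithPadding(tokenizer),
).train()
```

Note that the student in step 3 is re-initialized from the TAPT checkpoint rather than continuing from the teacher, a common self-training design choice assumed here; the self-training stage can also be iterated for multiple rounds.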