Pre-training language models (LMs) on large-scale unlabeled text enables them to achieve exceptional downstream performance far more easily than counterparts trained from scratch on the downstream tasks. In this work, we study what specific traits of the pre-training data, other than semantics, make a pre-trained LM superior to its counterparts trained from scratch on downstream tasks. We propose to use artificially constructed datasets as the pre-training data to exclude the effect of semantics and to further control the characteristics of the pre-training corpora. By fine-tuning the pre-trained models on the GLUE benchmark, we can measure how beneficial it is to transfer knowledge from a model trained on a dataset possessing a specific trait. We define and discuss three characteristics of the artificial datasets: 1) matching the token uni-gram or bi-gram distribution between pre-training and downstream fine-tuning, 2) the presence of explicit dependencies among the tokens in a sequence, and 3) the length of the implicit dependencies among the tokens in a sequence. Our experiments show that explicit dependencies in the pre-training sequences are critical to downstream performance. Our results also reveal that models achieve better downstream performance when pre-trained on datasets with longer-range implicit dependencies. Based on our analysis, we find that models pre-trained on artificial datasets are prone to learning spurious correlations in downstream tasks. Our work shows that even when LMs are not pre-trained on natural language, they still gain transferability to certain human-language downstream tasks once they learn to model token dependencies in sequences. This result helps explain the exceptional transferability of pre-trained LMs.
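To make the first characteristic concrete, below is a minimal sketch, not the paper's exact construction, of building an artificial pre-training corpus whose token uni-gram distribution matches that of a downstream corpus while carrying no natural-language semantics. The toy downstream data and all function names are hypothetical placeholders.

```python
# Minimal sketch (assumption, not the authors' pipeline): sample an artificial
# corpus whose uni-gram token distribution matches a downstream corpus.
import random
from collections import Counter

def unigram_distribution(corpus):
    """Estimate the token uni-gram distribution of a tokenized corpus."""
    counts = Counter(tok for sent in corpus for tok in sent)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def sample_artificial_corpus(dist, num_sequences, seq_len, seed=0):
    """Draw i.i.d. tokens from `dist`: the artificial data matches the
    downstream uni-gram statistics but has no real syntax or semantics."""
    rng = random.Random(seed)
    tokens, weights = zip(*dist.items())
    return [rng.choices(tokens, weights=weights, k=seq_len)
            for _ in range(num_sequences)]

if __name__ == "__main__":
    # Toy stand-in for downstream (e.g. GLUE) text, already tokenized.
    downstream = [["the", "cat", "sat"], ["the", "dog", "ran", "fast"]]
    dist = unigram_distribution(downstream)
    artificial = sample_artificial_corpus(dist, num_sequences=3, seq_len=5)
    for seq in artificial:
        print(" ".join(seq))
```

Matching bi-gram statistics or injecting explicit token dependencies would require additional structure in the sampler; this sketch only illustrates the uni-gram case.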