In recent years, the size of pre-trained language models (PLMs) has grown by leaps and bounds. However, the efficiency issues of these large-scale PLMs limit their use in real-world scenarios. We present a suite of cost-effective techniques that address the efficiency of pre-training, fine-tuning, and inference with PLMs. (1) We introduce knowledge inheritance, which accelerates pre-training by exploiting existing PLMs instead of training models from scratch. (2) We explore best practices for prompt tuning with large-scale PLMs; compared with conventional fine-tuning, prompt tuning significantly reduces the number of task-specific parameters. (3) We implement a new inference toolkit, InfMoE, for using large-scale PLMs with limited computational resources. Based on this cost-effective pipeline, we pre-train two models: an encoder-decoder bilingual model with 11 billion parameters (CPM-2) and its corresponding MoE version with 198 billion parameters. In our experiments, we compare CPM-2 with mT5 on downstream tasks, and the results show that CPM-2 has excellent general language intelligence. Moreover, we validate the efficiency of InfMoE when performing inference with models of tens of billions of parameters on a single GPU. All source code and model parameters are available at https://github.com/TsinghuaAI/CPM.
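To make the parameter-efficiency claim in point (2) concrete, the following is a minimal sketch of the general prompt-tuning idea, not CPM-2's exact implementation: the backbone PLM and its token embeddings are frozen, and only a small matrix of soft prompt vectors prepended to the input is trained. The class name `PromptTunedModel`, the prompt length, and the assumption that the backbone accepts pre-computed embeddings directly are all illustrative.

```python
import torch
import torch.nn as nn


class PromptTunedModel(nn.Module):
    """Illustrative prompt tuning: freeze the PLM, train only soft prompt embeddings."""

    def __init__(self, backbone: nn.Module, embed: nn.Embedding, prompt_length: int = 100):
        super().__init__()
        self.backbone = backbone  # frozen PLM body; assumed to accept input embeddings
        self.embed = embed        # frozen token-embedding table of the PLM
        hidden = embed.embedding_dim
        # The only task-specific parameters: prompt_length * hidden values.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_length, hidden) * 0.02)
        for p in self.backbone.parameters():
            p.requires_grad = False
        for p in self.embed.parameters():
            p.requires_grad = False

    def forward(self, input_ids: torch.Tensor):
        # input_ids: (batch, seq_len)
        batch = input_ids.size(0)
        token_embeds = self.embed(input_ids)                           # (batch, seq, hidden)
        prompts = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)  # (batch, P, hidden)
        inputs = torch.cat([prompts, token_embeds], dim=1)             # prepend soft prompts
        return self.backbone(inputs)
```

In use, the optimizer is built only over `model.soft_prompt`, so the per-task storage and update cost is a few hundred vectors instead of billions of weights, which is the source of the parameter reduction compared with conventional fine-tuning.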