We present an efficient method for pretraining large-scale autoencoding language models using training signals generated by an auxiliary model. Originating in ELECTRA, this training strategy has demonstrated sample efficiency in pretraining models at the scale of hundreds of millions of parameters. In this work, we conduct a comprehensive empirical study and propose a recipe, the "Model generated dEnoising TRaining Objective" (METRO), which incorporates some of the best recently developed modeling techniques to speed up, stabilize, and enhance pretrained language models without compromising their effectiveness. The resulting models, METRO-LM, with up to 5.4 billion parameters, achieve new state-of-the-art results on the GLUE, SuperGLUE, and SQuAD benchmarks. More importantly, METRO-LM models are efficient: they often outperform previous large models while using significantly smaller model sizes and lower pretraining cost.
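To make the core idea concrete, below is a minimal sketch of an ELECTRA-style, model-generated denoising objective of the kind the abstract describes: a small auxiliary masked language model fills in masked positions, and the main model is trained to detect which tokens the auxiliary model replaced. The toy `TinyEncoder`, the names `aux_model` and `main_model`, the masking rate, and the loss weight are illustrative assumptions, not the exact METRO recipe (which combines this signal with additional training and efficiency techniques).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy encoder standing in for the auxiliary (generator) and
# main (discriminator) Transformers; real METRO-LM models are far larger.
class TinyEncoder(nn.Module):
    def __init__(self, vocab_size, hidden, out_dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.body = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, tokens):
        h, _ = self.body(self.embed(tokens))
        return self.head(h)

vocab_size, hidden, mask_id = 1000, 64, 0
aux_model = TinyEncoder(vocab_size, hidden, vocab_size)  # predicts masked tokens (MLM)
main_model = TinyEncoder(vocab_size, hidden, 1)          # predicts replaced vs. original

tokens = torch.randint(1, vocab_size, (8, 32))           # dummy batch of token ids
mask = torch.rand(tokens.shape) < 0.15                   # 15% masking (illustrative)

# 1) Auxiliary model fills in masked positions with a standard MLM loss.
corrupted_in = tokens.masked_fill(mask, mask_id)
aux_logits = aux_model(corrupted_in)
mlm_loss = F.cross_entropy(aux_logits[mask], tokens[mask])

# 2) Sample the auxiliary model's predictions to build the noised input
#    that the main model must denoise (no gradient through sampling).
with torch.no_grad():
    sampled = torch.distributions.Categorical(logits=aux_logits[mask]).sample()
noised = tokens.clone()
noised[mask] = sampled

# 3) Main model performs replaced-token detection: classify each position
#    as original or replaced by the auxiliary model.
is_replaced = (noised != tokens).float()
rtd_logits = main_model(noised).squeeze(-1)
rtd_loss = F.binary_cross_entropy_with_logits(rtd_logits, is_replaced)

loss = mlm_loss + 50.0 * rtd_loss  # RTD weight is an illustrative value
loss.backward()
```

Because every position, not only the masked ones, yields a training signal for the main model, this setup is the source of the sample efficiency noted in the abstract.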