Recent work has demonstrated great success in training high-capacity autoregressive language models (GPT, GPT-2, GPT-3) on massive amounts of unlabeled text for text generation. Despite these strong results, autoregressive models face a growing training instability issue. Our study of GPT-2 models (117M and 1.5B parameters) shows that larger model sizes, sequence lengths, batch sizes, and learning rates lead to lower training stability and higher divergence risk. To avoid divergence and achieve better generalization, one has to train with smaller batch sizes and learning rates, which hurts training efficiency and lengthens training time. To overcome this stability-efficiency dilemma, we present a study of a curriculum learning-based approach that improves the pre-training convergence speed of autoregressive models. More importantly, we find that curriculum learning, acting as a regularization method, exerts a gradient variance reduction effect and enables training autoregressive models with much larger batch sizes and learning rates without instability, further improving the training speed. Our evaluations demonstrate that curriculum learning enables training GPT-2 models with an 8x larger batch size and a 4x larger learning rate, whereas the baseline approach struggles with training divergence. To reach the same validation perplexity targets during pre-training, curriculum learning reduces the required number of tokens and the wall-clock time by up to 61% and 49%, respectively. To reach the same or better zero-shot WikiText-103/LAMBADA evaluation results at the end of pre-training, curriculum learning reduces the required number of tokens and the wall-clock time by up to 54% and 70%, respectively.
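To make the curriculum learning idea concrete, below is a minimal sketch of one common form of such a curriculum for autoregressive pre-training: sequence-length warmup with a linear pacing function. The difficulty metric (sequence length), the pacing schedule, and all hyperparameter names here are illustrative assumptions for exposition, not the specific method or settings prescribed by this work.

```python
# Sketch of a curriculum learning pacing schedule, assuming the curriculum
# "difficulty" is the training sequence length. All names and defaults are
# illustrative assumptions, not values taken from the paper.

def curriculum_seqlen(step, start_len=64, full_len=2048,
                      warmup_steps=10_000, multiple=8):
    """Linearly grow the sequence length from start_len to full_len over
    warmup_steps optimizer steps, rounded down to a hardware-friendly multiple."""
    if step >= warmup_steps:
        return full_len
    frac = step / warmup_steps
    cur = int(start_len + frac * (full_len - start_len))
    cur = (cur // multiple) * multiple
    return max(start_len, min(cur, full_len))


def apply_curriculum(batch_tokens, step):
    """Truncate a batch of token-id sequences (shape [batch, full_len]) to the
    current curriculum length before the forward/backward pass."""
    cur_len = curriculum_seqlen(step)
    return batch_tokens[:, :cur_len]
```

Early steps then train on short, "easier" sequences, which empirically yields smaller gradient variance, and the full sequence length is reached only after the warmup, after which training proceeds as in the baseline.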