Due to the excessive cost of large-scale language model pre-training, considerable effort has been devoted to training BERT progressively -- starting from an inferior but low-cost model and gradually growing it to increase the computational complexity. Our objective is to advance the understanding of Transformer growth and discover principles that guide progressive training. First, we find that, similar to network architecture search, Transformer growth also favors compound scaling. Specifically, while existing methods only conduct network growth in a single dimension, we observe that it is beneficial to use compound growth operators and balance multiple dimensions (e.g., the depth, width, and input length of the model). Moreover, we explore alternative growth operators in each dimension via controlled comparisons to provide practical guidance for operator selection. In light of our analyses, the proposed method speeds up BERT pre-training by 73.6% and 82.2% for the base and large models respectively, while achieving comparable performance.
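To make the idea of compound growth concrete, the following minimal sketch shows one way a growth schedule could scale depth, width, and input length together across training stages, rather than growing a single dimension. The names `GrowthConfig` and `compound_schedule`, the linear interpolation, and the stage counts are illustrative assumptions and do not reflect the paper's actual growth operators.

```python
# Illustrative sketch (assumed, not the paper's implementation): a compound
# growth schedule that scales depth, width, and input length jointly.
from dataclasses import dataclass


@dataclass
class GrowthConfig:
    num_layers: int   # depth
    hidden_size: int  # width
    seq_length: int   # input length


def compound_schedule(start: GrowthConfig, target: GrowthConfig, num_stages: int):
    """Interpolate all three dimensions from a small starting model to the
    full target model over `num_stages` growth steps (hypothetical schedule)."""
    stages = []
    for s in range(num_stages + 1):
        frac = s / num_stages
        stages.append(GrowthConfig(
            num_layers=round(start.num_layers + frac * (target.num_layers - start.num_layers)),
            hidden_size=round(start.hidden_size + frac * (target.hidden_size - start.hidden_size)),
            seq_length=round(start.seq_length + frac * (target.seq_length - start.seq_length)),
        ))
    return stages


if __name__ == "__main__":
    # Example: grow toward a BERT-base-sized target in three stages.
    small = GrowthConfig(num_layers=3, hidden_size=256, seq_length=128)
    full = GrowthConfig(num_layers=12, hidden_size=768, seq_length=512)
    for i, cfg in enumerate(compound_schedule(small, full, num_stages=3)):
        print(f"stage {i}: {cfg}")
```

The key point the sketch conveys is that each stage changes several dimensions at once, so no single dimension bears the full cost of reaching the target model.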