Pre-trained language models, such as BERT, have achieved significant accuracy gains in many natural language processing tasks. Despite their effectiveness, the huge number of parameters makes training a BERT model computationally very challenging. In this paper, we propose an efficient multi-stage layerwise training (MSLT) approach to reduce the training time of BERT. We decompose the whole training process into several stages. Training starts from a small model with only a few encoder layers, and we gradually increase the depth of the model by adding new encoder layers. At each stage, we only train the newly added top encoder layers (those near the output layer). The parameters of the other layers, which were trained in previous stages, are not updated in the current stage. In BERT training, the backward computation is much more time-consuming than the forward computation, especially in the distributed training setting, where the backward computation time also includes the communication time for gradient synchronization. In the proposed training strategy, only the top few layers participate in backward computation, while most layers participate only in forward computation. Hence, both computation and communication efficiency are greatly improved. Experimental results show that the proposed method achieves more than a 110% training speedup without significant performance degradation.
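To make the staging scheme concrete, the following is a minimal PyTorch-style sketch of the idea, not the authors' implementation: previously trained encoder layers and the embedding are frozen, new layers are appended near the output, and the optimizer only covers the trainable top layers, so backward computation (and gradient synchronization in distributed training) stops at the newly added layers. The class name `TinyBert`, the stage sizes, and the hyperparameters are illustrative assumptions.

```python
# Sketch of multi-stage layerwise training (MSLT); names and sizes are illustrative.
import torch
import torch.nn as nn

class TinyBert(nn.Module):
    def __init__(self, hidden=256, heads=4, vocab=30522):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.layers = nn.ModuleList()            # encoder layers, grown stage by stage
        self.head = nn.Linear(hidden, vocab)     # MLM-style output head (kept trainable)
        self.hidden, self.heads, self.vocab = hidden, heads, vocab

    def add_stage(self, num_new_layers):
        """Freeze layers trained in earlier stages, then add new trainable layers on top."""
        for p in self.embed.parameters():
            p.requires_grad = len(self.layers) == 0   # embedding trains only in the first stage
        for layer in self.layers:
            for p in layer.parameters():
                p.requires_grad = False               # frozen layers: forward pass only
        for _ in range(num_new_layers):
            self.layers.append(nn.TransformerEncoderLayer(
                self.hidden, self.heads, batch_first=True))

    def forward(self, ids):
        x = self.embed(ids)
        for layer in self.layers:
            x = layer(x)
        return self.head(x)

model = TinyBert()
for stage_layers in [3, 3, 3, 3]:                    # e.g. grow to 12 layers over 4 stages
    model.add_stage(stage_layers)
    # The optimizer only sees the parameters that are trainable at this stage.
    opt = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)
    ids = torch.randint(0, model.vocab, (8, 128))    # dummy batch for illustration
    loss = nn.functional.cross_entropy(model(ids).transpose(1, 2), ids)
    loss.backward()                                  # gradients reach only the top new layers
    opt.step()
    opt.zero_grad()
```

Because the frozen layers' inputs and parameters do not require gradients, autograd stops propagating below the newly added layers, which is what yields the backward-pass and communication savings described above.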