Large language models have achieved state-of-the-art accuracy across a range of tasks. However, training a large language model requires massive computing resources, and as more and more open-source pre-trained models become available, it is worth studying how to take full advantage of them. We propose a method to save training time and resource cost by expanding a small, well-trained model into a larger one. We initialize a larger target model from a smaller source model by copying the source model's weight values and padding them with zeros or small initialization values, so that the source and target models produce approximately the same outputs; this holds because of block matrix multiplication and the residual connections in the transformer architecture. We evaluate the target model on several datasets and find that it remains comparable to the source model. When we continue training the target model, the training loss starts from a smaller value.
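The block-matrix argument can be illustrated with a minimal sketch. The NumPy snippet below (with hypothetical dimensions d_src and d_tgt, not taken from the paper) shows that copying a source weight matrix into the top-left block of a larger zero-padded matrix leaves the outputs on the original dimensions unchanged, while the new dimensions stay zero and thus pass through residual connections untouched.

```python
import numpy as np

# Hypothetical dimensions: a small source layer (d_src -> d_src)
# expanded into a larger target layer (d_tgt -> d_tgt).
d_src, d_tgt = 4, 6

rng = np.random.default_rng(0)
W_src = rng.normal(size=(d_src, d_src))   # well-trained source weights
x_src = rng.normal(size=d_src)            # a source-model hidden state

# Build the target weight: copy the source block, pad the rest with zeros.
W_tgt = np.zeros((d_tgt, d_tgt))
W_tgt[:d_src, :d_src] = W_src

# The target hidden state keeps the source values; new dims start at zero.
x_tgt = np.zeros(d_tgt)
x_tgt[:d_src] = x_src

# Block matrix multiplication: the first d_src outputs match the source
# model exactly, and the padded dimensions remain zero.
y_src = W_src @ x_src
y_tgt = W_tgt @ x_tgt
assert np.allclose(y_tgt[:d_src], y_src)
assert np.allclose(y_tgt[d_src:], 0.0)
```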