Deep learning-based models have come to dominate the current landscape of production recommender systems. Moreover, recent years have witnessed exponential growth in model scale--from Google's 2016 model with 1 billion parameters to Facebook's latest model with 12 trillion parameters. A significant quality boost has accompanied each jump in model capacity, which leads us to believe the era of 100 trillion parameters is around the corner. However, training such models is challenging even within industrial-scale data centers. The difficulty stems from the staggering heterogeneity of the training computation--the model's embedding layer can account for more than 99.99% of the total model size and is extremely memory-intensive, while the rest of the neural network is increasingly computation-intensive. Supporting the training of such huge models calls for an efficient distributed training system. In this paper, we address this challenge through careful co-design of both the optimization algorithm and the distributed system architecture. Specifically, to ensure both training efficiency and training accuracy, we design a novel hybrid training algorithm, in which the embedding layer and the dense neural network are handled by different synchronization mechanisms; we then build a system called Persia (short for parallel recommendation training system with hybrid acceleration) to support this hybrid training algorithm. Both theoretical analysis and empirical studies at up to 100 trillion parameters have been conducted to justify the system design and implementation of Persia. We make Persia publicly available (at https://github.com/PersiaML/Persia) so that anyone can easily train a recommender model at the scale of 100 trillion parameters.
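The core idea of the hybrid algorithm--updating the memory-heavy embedding layer asynchronously while keeping the computation-heavy dense network synchronous--can be illustrated with a minimal single-process sketch. This is a hypothetical toy illustration of the general sync/async split, not Persia's actual API or implementation; all names (`worker_step`, `embedding_table`, `dense_weights`) are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared state: a tiny embedding table (the memory-heavy part) and a
# toy dense-layer weight vector (the computation-heavy part).
embedding_table = rng.normal(size=(8, 4))  # 8 ids, 4-dim embeddings
dense_weights = rng.normal(size=(4,))

LR = 0.1

def worker_step(ids, grad_embed, grad_dense):
    """One worker's contribution for a mini-batch.

    Embedding-row gradients are applied immediately (asynchronous-style:
    no waiting for other workers), while the dense gradient is merely
    returned, to be averaged with the other workers' gradients before a
    single synchronous update.
    """
    for i, row_grad in zip(ids, grad_embed):
        embedding_table[i] -= LR * row_grad  # async: apply right away
    return grad_dense                        # sync: defer to the all-reduce

# Simulate one training step with two workers touching sparse ids.
g1 = worker_step([0, 3], rng.normal(size=(2, 4)), rng.normal(size=(4,)))
g2 = worker_step([5, 3], rng.normal(size=(2, 4)), rng.normal(size=(4,)))

# Emulated "all-reduce": average dense gradients, then apply them once.
dense_weights -= LR * (g1 + g2) / 2.0
```

In a real distributed setting the asynchronous path would go through an embedding parameter server and the synchronous path through a collective all-reduce; the sketch only shows why the two parts can tolerate different consistency models--each worker touches only a few embedding rows, so stale sparse updates are far less harmful than stale dense updates.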