Recent rapid developments in deep learning algorithms, distributed training, and hardware design for large models have enabled the training of extreme-scale models such as GPT-3 and Switch Transformer, which possess hundreds of billions or even trillions of parameters. However, under limited resources, extreme-scale model training, which demands enormous amounts of computation and memory, suffers from frustratingly low efficiency in model convergence. In this paper, we propose a simple training strategy called "Pseudo-to-Real" for large models with high memory-footprint requirements. Pseudo-to-Real is compatible with large models whose architectures consist of sequential layers. We demonstrate the pretraining of an unprecedented 10-trillion-parameter model, an order of magnitude larger than the state of the art, on only 512 GPUs within 10 days. Besides demonstrating the application of Pseudo-to-Real, we also provide a technique, Granular CPU offloading, to manage CPU memory for training large models while maintaining high GPU utilization. Fast training of extreme-scale models on a modest amount of resources can yield a much smaller carbon footprint and contribute to greener AI.
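As a rough illustration only, the PyTorch sketch below shows what a Pseudo-to-Real-style schedule on a sequential-layer model might look like, under the assumption that the "Pseudo" stage trains a stack of layers sharing a single set of weights and that the "Real" stage delinks the shared weights into independent per-layer copies before continuing training; the names PseudoRealMLP, delink, and train_steps are illustrative placeholders, not the paper's implementation.

    # Minimal sketch of a Pseudo-to-Real-style schedule (assumed mechanism:
    # cross-layer weight sharing followed by delinking into per-layer copies).
    import copy
    import torch
    import torch.nn as nn

    class PseudoRealMLP(nn.Module):
        """Sequential stack of blocks; starts in the cheap 'Pseudo' mode."""

        def __init__(self, dim: int = 128, num_layers: int = 12):
            super().__init__()
            shared = nn.Linear(dim, dim)
            # Pseudo stage: every layer references the SAME parameter tensor.
            self.layers = nn.ModuleList([shared for _ in range(num_layers)])

        def delink(self) -> None:
            """Real stage: replace shared references with independent copies."""
            self.layers = nn.ModuleList([copy.deepcopy(layer) for layer in self.layers])

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            for layer in self.layers:
                x = torch.relu(layer(x))
            return x

    def train_steps(model: nn.Module, steps: int) -> None:
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(steps):
            x = torch.randn(32, 128)
            loss = model(x).pow(2).mean()  # dummy objective for illustration
            opt.zero_grad()
            loss.backward()
            opt.step()

    model = PseudoRealMLP()
    train_steps(model, steps=100)  # Pseudo stage: few unique parameters, fast to converge
    model.delink()                 # promote shared weights to per-layer "real" weights
    train_steps(model, steps=100)  # Real stage: full-capacity model, warm-started

In this sketch the Pseudo stage optimizes only one layer's worth of unique parameters, so convergence is cheap; delinking then warm-starts the full model from those weights, which is the general flavor of the strategy named in the abstract.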