This work proposes RaNNC (Rapid Neural Network Connector) as middleware for automatic hybrid parallelism. In recent deep learning research, as exemplified by T5 and GPT-3, the size of neural network models continues to grow. Since such models no longer fit into the memory of accelerator devices, they must be partitioned using model parallelism techniques. Moreover, to accelerate training on huge training data, model parallelism must be combined with data parallelism, i.e., hybrid parallelism. Given a model description for PyTorch without any specification of model parallelism, RaNNC automatically partitions the model into a set of subcomponents so that (1) each subcomponent fits in a device's memory and (2) a high training throughput for pipeline parallelism is achieved by balancing the computation times of the subcomponents. In our experiments, we compared RaNNC with two popular frameworks, Megatron-LM (hybrid parallelism) and GPipe (originally proposed for model parallelism, but a version allowing hybrid parallelism also exists), for training models with increasingly greater numbers of parameters. In pre-training enlarged BERT models, RaNNC successfully trained models five times larger than those Megatron-LM could handle, and RaNNC's training throughputs were comparable to Megatron-LM's when pre-training the same models. RaNNC also achieved better training throughputs than GPipe on both the enlarged BERT model pre-training (GPipe with hybrid parallelism) and the enlarged ResNet models (GPipe with model parallelism) in all of the settings we tried. These results are remarkable because RaNNC partitions models automatically, without any modification to their descriptions, whereas Megatron-LM and GPipe require users to manually rewrite them.
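To make the claim of "no modification to the model description" concrete, the following is a minimal sketch of the intended usage, assuming the `pyrannc` Python package from the RaNNC project. The `pyrannc.RaNNCModule` wrapper and the shape of the training loop follow the project's public examples; the toy network, tensor sizes, and hyperparameters are purely illustrative.

```python
import torch
import torch.nn as nn
import pyrannc  # RaNNC's Python frontend


class Net(nn.Module):
    """An ordinary PyTorch model with no model-parallel annotations."""

    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)
        )

    def forward(self, x):
        return self.layers(x)


model = Net().to(torch.device("cuda"))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrapping the module is the only change to the user's code:
# RaNNC partitions the computation graph into subcomponents so that
# each fits in device memory and the pipeline stages are balanced.
model = pyrannc.RaNNCModule(model, opt)

x = torch.randn(64, 1024).to(torch.device("cuda"))
loss = model(x).sum()
loss.backward()  # gradients flow through the partitioned subcomponents
opt.step()
```

In the project's examples, such a script is launched with `mpirun` so that RaNNC can coordinate the pipeline stages and data-parallel replicas across the participating ranks; the partitioning itself requires no user intervention.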