Systems for training massive deep learning models (billions of parameters) today assume and require specialized "hyper-clusters": hundreds or thousands of GPUs wired with specialized high-bandwidth interconnects such as NVLink and InfiniBand. Besides being expensive, such dependence on hyper-clusters and custom high-speed interconnects limits the size of such clusters, creating (a) scalability limits on job parallelism and (b) resource fragmentation across hyper-clusters. In this paper, we present Varuna, a new system that enables training massive deep learning models on commodity networking. Varuna makes thrifty use of networking resources and automatically configures the user's training job to efficiently use any given set of resources. Varuna is therefore able to leverage "low-priority" VMs that cost about 5x less than dedicated GPUs, significantly reducing the cost of training massive models. We demonstrate the efficacy of Varuna by training massive models, including a 200-billion-parameter model, on such 5x-cheaper "spot VMs" while maintaining high training throughput. Varuna improves end-to-end training time by up to 18x compared to other model-parallel approaches and by up to 26% compared to other pipeline-parallel approaches. The code for Varuna is available at https://github.com/microsoft/varuna.