Training deep learning (DL) models that do not fit in the memory of a single GPU is a painful process, forcing users to procure multiple GPUs and adopt model-parallel execution. Unfortunately, sequential dependencies in neural architectures often block efficient multi-device training, leading to suboptimal performance. We present 'model spilling', a technique that moves groups of layers, or shards, of models such as Transformers and CNNs between DRAM and GPU memory, enabling arbitrarily large models to be trained even on a single GPU. We then present a set of novel techniques that leverage spilling to raise efficiency for multi-model training workloads such as model selection: a new hybrid of task and model parallelism, a new shard scheduling heuristic, and 'double buffering' to hide latency. We prototype our ideas in a system we call HYDRA to support seamless single-model and multi-model training of large DL models. Experiments with real benchmark workloads show that HYDRA is over 7x faster than regular model parallelism and over 50% faster than state-of-the-art industrial tools for pipeline parallelism.
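The core idea of spilling with double buffering can be illustrated with a minimal sketch. This is a hypothetical simulation, not HYDRA's implementation: `Shard`, `to`, and `train_step` are illustrative names, and in practice the GPU/DRAM transfers would be asynchronous copies of tensors on CUDA streams rather than attribute updates. While shard i executes on the GPU, shard i+1 is prefetched so the copy latency is hidden behind compute, and each shard is spilled back to host DRAM once it finishes.

```python
class Shard:
    """A group of consecutive layers small enough to fit in GPU memory."""
    def __init__(self, idx):
        self.idx = idx
        self.device = "dram"   # shards live in host DRAM by default

    def to(self, device):
        # Stand-in for an (ideally asynchronous) DRAM <-> GPU copy.
        self.device = device
        return self


def train_step(shards):
    """Run one pass over the shards, spilling them in and out of GPU memory.

    Double buffering: shard i+1 is moved to the GPU while shard i
    computes, so transfer latency overlaps with computation.
    """
    log = []
    prefetched = shards[0].to("gpu")
    for i in range(len(shards)):
        current = prefetched                      # already resident on GPU
        if i + 1 < len(shards):
            prefetched = shards[i + 1].to("gpu")  # prefetch next shard
        log.append(f"compute shard {current.idx} on {current.device}")
        current.to("dram")                        # spill back to host DRAM
    return log
```

Because only the current and prefetched shards are ever GPU-resident, peak device memory stays bounded by two shards regardless of total model size, which is what allows arbitrarily large models on one GPU.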