Training LLMs relies on distributed implementations that use multiple GPUs to compute gradients in parallel and shard the optimizer states. However, synchronizing gradients in data-parallel setups introduces a communication overhead that grows with the number of workers, limiting parallelization efficiency. Local optimization algorithms reduce communication but incur high memory costs, as they prevent optimizer state sharding and thus hinder scalability. To address this, we propose \textbf{AC}cumulate while \textbf{CO}mmunicate (ACCO), a memory-efficient optimization algorithm for distributed LLM training. By synchronizing delayed gradients while computing new ones, ACCO reduces GPU idle time and supports heterogeneous hardware. To mitigate the convergence issues caused by delayed updates, we introduce a novel technique that keeps the training dynamics aligned with those of standard distributed optimization. Compared to ZeRO-1, our approach is significantly faster and scales effectively across heterogeneous hardware.
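As a rough illustration of the overlap described above, the following is a minimal PyTorch-style sketch in which the previous step's gradients are all-reduced asynchronously while the next forward/backward pass runs. The names (\texttt{model}, \texttt{data\_loader}, \texttt{optimizer}, \texttt{loss\_fn}) are placeholders; the sketch shows only the delayed-synchronization overlap and omits ACCO's gradient accumulation, optimizer state sharding, and its correction for delayed updates. It is not the paper's implementation.

\begin{verbatim}
import torch
import torch.distributed as dist

def train_overlapped(model, data_loader, optimizer, loss_fn):
    """Hypothetical sketch: overlap gradient communication with computation.

    Assumes torch.distributed is already initialized (e.g. via
    init_process_group) and that every worker holds a replica of `model`.
    This illustrates the delayed-synchronization idea only, not ACCO itself.
    """
    pending = []          # async all-reduce handles for in-flight gradients
    delayed_grads = None  # previous step's gradients, being synchronized

    for inputs, targets in data_loader:
        # 1) Compute this step's gradients locally while the previous
        #    step's gradients are still being all-reduced in the background.
        loss = loss_fn(model(inputs), targets)
        new_grads = torch.autograd.grad(loss, list(model.parameters()))

        # 2) Wait for the delayed gradients, average them, apply the update.
        if delayed_grads is not None:
            for work in pending:
                work.wait()  # communication/computation overlap ends here
            world_size = dist.get_world_size()
            for p, g in zip(model.parameters(), delayed_grads):
                p.grad = g / world_size
            optimizer.step()
            optimizer.zero_grad(set_to_none=True)

        # 3) Launch the asynchronous all-reduce of the fresh gradients;
        #    it overlaps with the next iteration's forward/backward pass.
        delayed_grads = [g.detach().clone() for g in new_grads]
        pending = [dist.all_reduce(g, async_op=True) for g in delayed_grads]
\end{verbatim}

In this naive form the applied gradients are one step stale, which is exactly the mismatch that the technique mentioned in the abstract is designed to remove so that the training dynamics match those of standard synchronous data-parallel optimization.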