With the rapid adoption of large language models (LLMs) in recommendation systems, the computational and communication bottlenecks caused by their massive parameter counts and data volumes have become increasingly prominent. This paper systematically investigates two classes of optimization methods for distributed training of LLMs in recommendation scenarios: model parallelism and data parallelism. For model parallelism, we implement both tensor parallelism and pipeline parallelism, and introduce an adaptive load-balancing mechanism to reduce cross-device communication overhead. For data parallelism, we compare synchronous and asynchronous modes, combining gradient compression and sparsification techniques with an efficient gradient-aggregation communication framework to substantially improve bandwidth utilization. Experiments on a real-world recommendation dataset in a simulated serving environment show that the proposed hybrid parallelism scheme increases training throughput by over 30% and improves resource utilization by approximately 20% relative to traditional single-mode parallelism, while maintaining strong scalability and robustness. Finally, we discuss the trade-offs among different parallel strategies in online deployment and outline future directions involving heterogeneous hardware integration and automated scheduling.
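As a rough illustration of the gradient sparsification and aggregation idea referenced above, the sketch below shows a top-k compress-then-gather step for a single gradient tensor using PyTorch distributed primitives. It is a minimal sketch under stated assumptions, not the paper's implementation: the function names, the 1% compression ratio, and the use of fixed-size all-gather are illustrative choices, and an initialized process group is assumed.

```python
import torch
import torch.distributed as dist

# Illustrative sketch of top-k gradient sparsification with collective
# aggregation; not the paper's implementation. Assumes dist.init_process_group
# has already been called on every data-parallel worker.

def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Select the largest-magnitude `ratio` fraction of entries (indices + values)."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]

def sparse_allreduce(grad: torch.Tensor, ratio: float = 0.01) -> torch.Tensor:
    """Exchange only each worker's top-k (index, value) pairs instead of the
    full dense gradient, then rebuild and average the aggregate locally."""
    world = dist.get_world_size()
    idx, val = topk_compress(grad, ratio)

    # k is identical on every rank, so fixed-size all_gather buffers work.
    idx_buf = [torch.empty_like(idx) for _ in range(world)]
    val_buf = [torch.empty_like(val) for _ in range(world)]
    dist.all_gather(idx_buf, idx)
    dist.all_gather(val_buf, val)

    # Accumulate each rank's sparse contribution into a dense tensor, then average.
    agg = torch.zeros(grad.numel(), device=grad.device)
    for i, v in zip(idx_buf, val_buf):
        agg.index_add_(0, i, v)
    return (agg / world).view_as(grad)
```

With a compression ratio of 0.01, each worker transmits roughly 1% of the gradient entries (plus their indices) per step; production systems typically combine this with error-feedback accumulation of the dropped residual, which is omitted here for brevity.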