The rapid scaling of large language models (LLMs) has exposed critical limitations in current hardware architectures, including constraints in memory capacity, computational efficiency, and interconnect bandwidth. DeepSeek-V3, trained on 2,048 NVIDIA H800 GPUs, demonstrates how hardware-aware model co-design can effectively address these challenges, enabling cost-efficient training and inference at scale. This paper presents an in-depth analysis of the DeepSeek-V3/R1 model architecture and its AI infrastructure, highlighting key innovations such as Multi-head Latent Attention (MLA) for enhanced memory efficiency, Mixture of Experts (MoE) architectures for optimized computation-communication trade-offs, FP8 mixed-precision training to unlock the full potential of hardware capabilities, and a Multi-Plane Network Topology to minimize cluster-level network overhead. Building on the hardware bottlenecks encountered during DeepSeek-V3's development, we engage in a broader discussion with academic and industry peers on potential future hardware directions, including precise low-precision computation units, scale-up and scale-out convergence, and innovations in low-latency communication fabrics. These insights underscore the critical role of hardware and model co-design in meeting the escalating demands of AI workloads, offering a practical blueprint for innovation in next-generation AI systems.
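To make the memory-efficiency claim concrete, the sketch below illustrates the low-rank key-value compression idea behind MLA in PyTorch: hidden states are down-projected to a small shared latent per token, and only that latent is cached instead of full per-head keys and values. This is a minimal illustrative sketch under our own assumptions, not DeepSeek-V3's implementation; the class name, the projection names (`kv_down`, `k_up`, `v_up`), and the chosen dimensions are hypothetical, and details such as decoupled rotary position embeddings are omitted.

```python
# Minimal sketch of MLA-style low-rank KV compression (illustrative only,
# not DeepSeek-V3's actual code). The point: the cache stores a d_latent
# vector per token rather than 2 * n_heads * d_head values per token.
import torch
import torch.nn as nn

class LatentKVAttention(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_latent=128):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        # Down-project hidden states to a small shared latent; only this
        # latent is cached during decoding (the memory saving).
        self.kv_down = nn.Linear(d_model, d_latent)
        # Up-project the cached latent back to per-head keys and values.
        self.k_up = nn.Linear(d_latent, d_model)
        self.v_up = nn.Linear(d_latent, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, latent_cache=None):
        B, T, _ = x.shape
        latent = self.kv_down(x)                       # (B, T, d_latent)
        if latent_cache is not None:                   # extend cached latents
            latent = torch.cat([latent_cache, latent], dim=1)
        S = latent.shape[1]
        q = self.q_proj(x).view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = self.k_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        v = self.v_up(latent).view(B, S, self.n_heads, self.d_head).transpose(1, 2)
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head**0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(y), latent                     # cache grows by d_latent per token

x = torch.randn(2, 16, 1024)
mla = LatentKVAttention()
y, cache = mla(x)
# cache: (2, 16, 128) floats vs. (2, 16, 2048) for a standard per-head KV cache.
```

With these example dimensions the cached state per token shrinks by 16x relative to storing full keys and values, which is the trade the abstract refers to: extra up-projection compute in exchange for a much smaller inference-time memory footprint.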