As large language models scale, distributed training systems increasingly rely on thousands of GPUs running for days or weeks. Fault tolerance is therefore essential, and periodic model checkpointing is the standard technique for achieving it. However, the already popular class of sparsely activated Mixture-of-Experts (MoE) models poses unique challenges. While their computational demands are similar to those of dense models, their larger parameter counts require bigger checkpoints that cannot fully overlap with training iterations, forcing either throughput degradation or reduced checkpoint frequency. We present MoEtion, a distributed in-memory checkpointing system designed for efficient and reliable training of large MoE models with near-zero overhead. MoEtion reduces checkpoint size by up to $9\times$, to a level comparable to dense models, by exploiting the skewness in expert popularity: it dynamically selects the critical subset of experts to snapshot at each checkpointing step. MoEtion increases checkpointing frequency by up to $15\times$ over state-of-the-art in-memory checkpointing systems such as Gemini; the reduced checkpoint size allows checkpointing on every training iteration and fully overlaps checkpointing with training operations. Finally, MoEtion preserves model convergence properties: after faults, it adjusts expert capacities to ensure consistent token processing without degrading accuracy. Experiments on MoE-GPT models with 8 to 64 experts show that MoEtion reduces checkpointing overheads by up to $12\times$ while maintaining model accuracy and fault tolerance. These results underscore MoEtion's ability to improve both training efficiency and reliability.
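The selection mechanism the abstract alludes to, snapshotting only the experts whose weights have changed most since their last snapshot, can be illustrated with a minimal sketch. The code below is a hypothetical illustration of popularity-skew-based expert selection under stated assumptions, not MoEtion's actual algorithm; the function name and parameters (`select_experts_to_snapshot`, `budget`, `staleness_limit`) are invented for this example.

```python
def select_experts_to_snapshot(tokens_per_expert, iters_since_snapshot,
                               budget, staleness_limit=16):
    """Illustrative sketch (not MoEtion's algorithm): pick a small subset of
    experts to checkpoint this iteration.

    tokens_per_expert    : tokens routed to each expert since its last
                           snapshot (a proxy for how much its weights changed).
    iters_since_snapshot : iterations since each expert was last snapshotted.
    budget               : max experts to snapshot per iteration, sized so the
                           copy overlaps with a single training step.
    staleness_limit      : force-snapshot any expert not captured for this
                           many iterations, so cold experts are still covered.
    """
    n = len(tokens_per_expert)
    # Experts that are too stale are snapshotted regardless of popularity.
    forced = [e for e in range(n) if iters_since_snapshot[e] >= staleness_limit]
    forced_set = set(forced)
    # Fill the remaining budget with the most popular (most-updated) experts.
    remaining = max(0, budget - len(forced))
    by_popularity = sorted(
        (e for e in range(n) if e not in forced_set),
        key=lambda e: tokens_per_expert[e],
        reverse=True,
    )
    return forced + by_popularity[:remaining]


# Example: 8 experts with heavily skewed routing, budget of 2 experts/iteration.
tokens = [900, 850, 40, 30, 20, 10, 5, 5]
stale  = [1, 1, 3, 17, 2, 2, 4, 1]
print(select_experts_to_snapshot(tokens, stale, budget=2))
# -> [3, 0]  (expert 3 is forced by staleness; expert 0 is the most popular)
```

In such a scheme, the per-iteration budget would be chosen so that copying the selected experts' weights completes within one training step, which is what allows checkpointing to fully overlap with computation as the abstract claims.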