Recent advances in reinforcement learning (RL) have substantially improved the training of large-scale language models, leading to significant gains in generation quality and reasoning ability. However, most existing research focuses on dense models, while RL training for Mixture-of-Experts (MoE) architectures remains underexplored. To address the instability commonly observed in MoE training, we propose a novel router-aware approach to optimize importance sampling (IS) weights in off-policy RL. Specifically, we design a rescaling strategy guided by router logits, which effectively reduces gradient variance and mitigates training divergence. Experimental results demonstrate that our method significantly improves both the convergence stability and the final performance of MoE models, highlighting the potential of RL algorithmic innovations tailored to MoE architectures and providing a promising direction for efficient training of large-scale expert models.
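The abstract describes rescaling off-policy importance-sampling (IS) weights with a signal derived from router logits, but gives no formula. The sketch below is a hypothetical illustration of that general idea, not the paper's actual method: it down-weights the per-token IS ratio where the routing distribution has drifted between the behavior policy and the current policy. All names and hyperparameters (`tau`, `clip`, the symmetric-KL drift measure) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of router-aware IS-weight rescaling (assumed form,
# not the paper's exact algorithm).
import torch
import torch.nn.functional as F

def router_aware_is_weights(logp_new, logp_old,
                            router_logits_new, router_logits_old,
                            tau=1.0, clip=0.2):
    """Clipped per-token IS ratios, attenuated where router drift is large.

    logp_new, logp_old:          (batch, seq) log-probs of sampled tokens
    router_logits_new/old:       (batch, seq, n_experts) router logits
    """
    # Standard off-policy IS ratio: pi_new(a|s) / pi_old(a|s).
    ratio = torch.exp(logp_new - logp_old)

    # Router drift: symmetric KL between old and new routing distributions.
    p_new = F.softmax(router_logits_new, dim=-1).clamp_min(1e-8)
    p_old = F.softmax(router_logits_old, dim=-1).clamp_min(1e-8)
    kl = 0.5 * ((p_new * (p_new.log() - p_old.log())).sum(-1)
                + (p_old * (p_old.log() - p_new.log())).sum(-1))

    # Larger drift -> smaller effective IS weight, reducing gradient variance.
    ratio = ratio * torch.exp(-kl / tau)

    # Conventional PPO-style clipping on the rescaled ratio.
    return torch.clamp(ratio, 1.0 - clip, 1.0 + clip)
```

In this sketch the rescaling factor `exp(-KL/tau)` is one plausible choice for mapping router disagreement to a multiplicative damping of the IS weight; the paper's concrete rescaling strategy may differ.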