The Mixture-of-Experts (MoE) technique can scale up the model size of Transformers with affordable computational overhead. We point out that existing learning-to-route MoE methods suffer from the routing fluctuation issue, i.e., the target expert of the same input may change as training proceeds, but only one expert is activated for that input during inference. Routing fluctuation tends to harm sample efficiency because the same input updates different experts while only one is ultimately used. In this paper, we propose StableMoE, which has two training stages, to address the routing fluctuation problem. In the first training stage, we learn a balanced and cohesive routing strategy and distill it into a lightweight router decoupled from the backbone model. In the second training stage, we use the distilled router to determine the token-to-expert assignment and freeze it for a stable routing strategy. We validate our method on language modeling and multilingual machine translation. The results show that StableMoE outperforms existing MoE methods in terms of both convergence speed and performance.
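To make the two-stage idea concrete, below is a minimal sketch, not the paper's actual implementation: it assumes a word-embedding-based lightweight router, top-1 hard assignment, and a cross-entropy distillation loss against a hypothetical teacher assignment; the names LightweightRouter, stage1_distillation_loss, and freeze_for_stage2 are illustrative.

```python
# Minimal sketch of StableMoE-style two-stage routing (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class LightweightRouter(nn.Module):
    """Token-level router decoupled from the backbone: routing scores depend only
    on the token id (via an embedding table) and learned expert centroids."""

    def __init__(self, vocab_size: int, dim: int, num_experts: int):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, dim)
        self.expert_centroids = nn.Parameter(torch.randn(num_experts, dim))

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, num_experts) routing scores, independent of backbone hidden states.
        return self.token_emb(token_ids) @ self.expert_centroids.t()


def stage1_distillation_loss(router_scores: torch.Tensor,
                             teacher_assignment: torch.Tensor) -> torch.Tensor:
    # Stage 1 (sketch): distill the learned routing strategy into the lightweight
    # router by matching the teacher's top-1 expert choice with cross-entropy.
    return F.cross_entropy(
        router_scores.view(-1, router_scores.size(-1)),
        teacher_assignment.view(-1),
    )


def freeze_for_stage2(router: LightweightRouter) -> None:
    # Stage 2 (sketch): freeze the distilled router so the token-to-expert
    # assignment stays fixed while the experts continue training.
    for p in router.parameters():
        p.requires_grad_(False)


# Usage sketch: hard top-1 token-to-expert assignment from the frozen router.
router = LightweightRouter(vocab_size=32000, dim=64, num_experts=8)
freeze_for_stage2(router)
token_ids = torch.randint(0, 32000, (2, 16))
expert_ids = router(token_ids).argmax(dim=-1)  # stable assignment, shape (2, 16)
```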