The Mixture of Experts (MoE) architecture enables outrageously large neural networks by scaling model parameter size independently from computational demand (FLOPs). However, current DNN frameworks cannot effectively support the dynamic data flow of Mixture of Experts models, and implementations built on top of these frameworks need to use workarounds that introduce significant overheads. To address the limitations of these frameworks, we present DynaMoE, a DNN library that uses dynamic recompilations to optimize and adapt the use of computational resources to the dynamic needs of Mixture of Experts models. Our evaluation shows that DynaMoE achieves a 1.8x speedup and supports 2.3x larger model sizes when compared to existing MoE systems, even when not using recompilations. We then present further optimizations enabled by dynamic recompilations that yield an additional 1.7x speedup while simultaneously reducing memory pressure and improving model quality.