Mixture of Experts layers (MoEs) enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full-shot fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute efficient. At more modest training budgets, MoEs can match the performance of dense models using $\sim$4 times less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies greatly across tasks and domains, suggesting that MoE and dense models generalize differently in ways that are worthy of future study. We make our code and models publicly available for research use.
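The core mechanism referenced above is conditional computation: each token activates only a small subset of the model's parameters. The following is a minimal, hedged sketch of a top-1 routed MoE feed-forward block in PyTorch, included purely for illustration. It is not the paper's implementation; the class name `TopOneMoE`, the layer sizes, and the routing details are assumptions chosen only to show how per-token compute stays roughly constant while total parameters grow with the number of experts.

```python
# Illustrative sketch of conditional computation in an MoE layer (top-1 token
# routing). Not the paper's implementation; all names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopOneMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.ReLU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(num_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model). Each token is routed to a single expert,
        # so only 1/num_experts of the expert parameters are used per token.
        gate_probs = F.softmax(self.router(x), dim=-1)   # (tokens, experts)
        top_prob, top_idx = gate_probs.max(dim=-1)       # chosen expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                # Scale by the gate probability so routing stays differentiable.
                out[mask] = top_prob[mask].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: total parameters grow with num_experts, but per-token FLOPs do not.
layer = TopOneMoE(d_model=16, d_hidden=64, num_experts=4)
tokens = torch.randn(8, 16)
print(layer(tokens).shape)  # torch.Size([8, 16])
```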