In deep learning, models typically reuse the same parameters for all inputs. Mixture of Experts (MoE) defies this and instead selects different parameters for each incoming example. The result is a sparsely activated model, with an outrageous number of parameters but a constant computational cost. However, despite several notable successes of MoE, widespread adoption has been hindered by complexity, communication costs, and training instability. We address these issues with the Switch Transformer. We simplify the MoE routing algorithm and design intuitively improved models with reduced communication and computational costs. Our proposed training techniques help wrangle the instabilities, and we show that large sparse models may be trained, for the first time, with lower-precision (bfloat16) formats. We design models based on T5-Base and T5-Large that obtain up to 7x increases in pre-training speed with the same computational resources. These improvements extend to multilingual settings, where we measure gains over mT5-Base across all 101 languages. Finally, we advance the current scale of language models by pre-training models with up to a trillion parameters on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the T5-XXL model.
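To make the routing idea concrete, the sketch below illustrates top-1 ("switch") routing in plain NumPy: a learned router scores each token against the experts, the token is sent only to its highest-probability expert, and a per-expert capacity bounds the work any single expert does. The function name `switch_route`, its arguments, and the overflow-dropping behaviour are illustrative assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def switch_route(tokens, router_weights, experts, capacity_factor=1.25):
    """Minimal sketch of top-1 (switch) routing.

    tokens:         [num_tokens, d_model] token representations
    router_weights: [d_model, num_experts] router projection
    experts:        list of callables, one feed-forward network per expert
    """
    num_tokens, _ = tokens.shape
    num_experts = len(experts)

    # Router produces a probability distribution over experts for each token.
    logits = tokens @ router_weights
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    # Top-1: every token is dispatched to exactly one expert.
    expert_index = probs.argmax(axis=-1)
    gate = probs[np.arange(num_tokens), expert_index]

    # Expert capacity caps how many tokens a single expert may process.
    capacity = int(capacity_factor * num_tokens / num_experts)

    outputs = np.zeros_like(tokens)
    for e in range(num_experts):
        # Tokens routed to expert e, truncated at capacity (overflow is dropped here).
        chosen = np.where(expert_index == e)[0][:capacity]
        if chosen.size:
            # Scaling by the gate value keeps the router differentiable.
            outputs[chosen] = gate[chosen, None] * experts[e](tokens[chosen])
    return outputs
```

Routing each token to a single expert, rather than the top-k (k >= 2) experts of earlier MoE formulations, is what keeps the per-token computation constant even as the total parameter count grows with the number of experts.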