Existing pre-trained models are generally geared towards a particular class of problems. To date, there still seems to be no consensus on what the right architecture and pre-training setup should be. This paper presents a unified framework for pre-training models that are universally effective across datasets and setups. We begin by disentangling architectural archetypes from pre-training objectives -- two concepts that are commonly conflated. Next, we present a generalized and unified perspective for self-supervision in NLP and show how different pre-training objectives can be cast as one another and how interpolating between different objectives can be effective. We then propose Mixture-of-Denoisers (MoD), a pre-training objective that combines diverse pre-training paradigms. We furthermore introduce a notion of mode switching, wherein downstream fine-tuning is associated with specific pre-training schemes. We conduct extensive ablative experiments to compare multiple pre-training objectives and find that our method pushes the Pareto frontier by outperforming T5 and/or GPT-like models across multiple diverse setups. Finally, by scaling our model up to 20B parameters, we achieve SOTA performance on 50 well-established supervised NLP tasks spanning language generation (with automated and human evaluation), language understanding, text classification, question answering, commonsense reasoning, long text reasoning, structured knowledge grounding, and information retrieval. Our model also achieves strong results at in-context learning, outperforming 175B GPT-3 on zero-shot SuperGLUE and tripling the performance of T5-XXL on one-shot summarization. We release Flax-based T5X model checkpoints for the 20B model at \url{https://github.com/google-research/google-research/tree/master/ul2}.
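As a rough illustration of the Mixture-of-Denoisers and mode-switching ideas summarized above, the following Python sketch samples one of three denoising paradigms per training example and prepends the corresponding mode token. The mode tokens [R], [S], [X] follow the paper's naming; the helper functions, span lengths, and corruption rates below are illustrative assumptions rather than the paper's actual configuration.

\begin{verbatim}
import random

SENTINEL = "<extra_id_{}>"  # T5-style sentinel tokens (assumed here)

def span_corrupt(tokens, mean_span_len, corrupt_rate):
    """Mask random spans; return (corrupted input, target of masked spans)."""
    n_corrupt = max(1, int(len(tokens) * corrupt_rate))
    inputs, targets, i, sid = [], [], 0, 0
    while i < len(tokens):
        if n_corrupt > 0 and random.random() < corrupt_rate:
            span = min(mean_span_len, n_corrupt, len(tokens) - i)
            inputs.append(SENTINEL.format(sid))
            targets.append(SENTINEL.format(sid))
            targets.extend(tokens[i:i + span])
            i += span
            n_corrupt -= span
            sid += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return inputs, targets

def prefix_lm(tokens):
    """S-denoiser: split into a prefix (input) and a continuation (target)."""
    cut = random.randint(1, len(tokens) - 1)
    return tokens[:cut], tokens[cut:]

def mixture_of_denoisers(tokens):
    """Sample one denoiser and prepend its mode token to the input."""
    mode = random.choice(["[R]", "[S]", "[X]"])
    if mode == "[R]":    # regular span corruption: short spans, low rate
        inp, tgt = span_corrupt(tokens, mean_span_len=3, corrupt_rate=0.15)
    elif mode == "[X]":  # extreme denoising: long spans / high corruption
        inp, tgt = span_corrupt(tokens, mean_span_len=12, corrupt_rate=0.5)
    else:                # sequential denoising (prefix language modeling)
        inp, tgt = prefix_lm(tokens)
    return [mode] + inp, tgt

# Usage: one pre-training example per call. At fine-tuning or inference
# time, the mode token is chosen to match the downstream task, which is
# the "mode switching" referred to in the abstract.
example = "the quick brown fox jumps over the lazy dog".split()
print(mixture_of_denoisers(example))
\end{verbatim}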