The surprising ability of Large Language Models (LLMs) to perform well on complex reasoning with only few-shot chain-of-thought prompting is believed to emerge only in very large-scale models (100+ billion parameters). We show that such abilities can, in fact, be distilled from GPT-3.5 ($\ge$ 175B) into T5 variants ($\le$ 11B). We propose model specialization: specializing the model's ability towards a target task. The hypothesis is that large models (commonly viewed as larger than 100B) have strong modeling power, but that power is spread across a large spectrum of tasks. Small models (commonly viewed as smaller than 10B) have limited capacity, but if we concentrate that capacity on a specific target task, they can achieve decently improved performance. We use multi-step math reasoning as our testbed because it is a very typical emergent ability. We show two important aspects of model abilities: (1) there exists a very complex balance/tradeoff between language models' multi-dimensional abilities; (2) by paying the price of decreased generic ability, we can clearly lift the scaling curve of models smaller than 10B towards specialized multi-step math reasoning ability. We further give comprehensive discussions of important design choices for better generalization, including the tuning data format, the starting model checkpoint, and a new model selection method. We hope our practice and discoveries can serve as an important attempt towards specialized smaller models in the new research paradigm set by LLMs.
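To make the specialization recipe concrete, the sketch below fine-tunes a T5 student on chain-of-thought rationales produced by a larger teacher. This is a minimal illustration, not the paper's exact setup: the example pair, the `t5-base` checkpoint, and the hyperparameters are placeholder assumptions; in practice the rationales would be sampled from GPT-3.5 on the target task's training questions.

```python
# Minimal sketch of model specialization via chain-of-thought distillation.
# Assumption: teacher-generated (question, rationale) pairs already exist,
# e.g. GPT-3.5 chain-of-thought answers on multi-step math questions.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5TokenizerFast

# Hypothetical distillation pair; real data would come from the teacher model.
cot_pairs = [
    {
        "question": "Natalia sold 48 clips in April and half as many in May. "
                    "How many clips did she sell in total?",
        "rationale": "Half of 48 is 24. 48 + 24 = 72. The answer is 72.",
    },
]

model_name = "t5-base"  # placeholder for the smaller student checkpoint
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def collate(batch):
    # Encode questions as encoder inputs and teacher rationales as decoder targets.
    inputs = tokenizer([ex["question"] for ex in batch],
                       padding=True, truncation=True, return_tensors="pt")
    targets = tokenizer([ex["rationale"] for ex in batch],
                        padding=True, truncation=True, return_tensors="pt")
    labels = targets.input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # mask padding in the loss
    return inputs.input_ids, inputs.attention_mask, labels

loader = DataLoader(cot_pairs, batch_size=1, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(1):
    for input_ids, attention_mask, labels in loader:
        # Standard seq2seq cross-entropy on the teacher's rationale tokens.
        loss = model(input_ids=input_ids,
                     attention_mask=attention_mask,
                     labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

The key design choices the paper discusses (tuning data format, starting checkpoint, and model selection) would all plug into this loop: the data format determines how `question` and `rationale` are serialized, and the starting checkpoint replaces the placeholder `model_name`.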