Language models (LMs) have demonstrated remarkable performance on downstream tasks given in-context exemplars or human instructions. Recent works have shown that chain-of-thought (CoT) prompting can elicit LMs to solve complex reasoning tasks step by step. However, the efficacy of prompt-based CoT methods is restricted to very large LMs such as GPT-3 (175B), thus limiting deployability. In this paper, we revisit the fine-tuning approach to enable complex reasoning in smaller LMs, optimized to perform a specific task efficiently. We propose Fine-tune-CoT, a method that leverages the capabilities of very large LMs to generate reasoning samples and teach smaller models via fine-tuning. We evaluate our method on publicly available LMs across a wide range of complex tasks and model sizes. We find that Fine-tune-CoT enables substantial reasoning capability in small models, whereas previous prompt-based baselines exhibit near-random performance. Student models can even outperform the teacher on some tasks while reducing model size requirements by several orders of magnitude. We conduct extensive ablations and sample studies to understand the reasoning capabilities of student models. We also identify several important nuances that have been overlooked in concurrent fine-tuning works on CoT and address them in our analysis.
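The core of the approach described above is a data pipeline: a large teacher LM generates step-by-step rationales, which are packaged into prompt–completion pairs for fine-tuning a small student. The sketch below illustrates this packaging step only; the function name, field names, and formatting are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of the Fine-tune-CoT data-preparation step: wrap a
# teacher-generated chain-of-thought rationale into a (prompt, completion)
# pair for fine-tuning a student model. All names here are hypothetical.

def make_finetune_sample(question: str, teacher_rationale: str, answer: str) -> dict:
    """Build one fine-tuning example from a teacher's reasoning sample."""
    prompt = f"Q: {question}\nA:"
    # The completion includes the rationale before the final answer, so the
    # student learns to produce the reasoning steps, not just the label.
    completion = f" {teacher_rationale} Therefore, the answer is {answer}."
    return {"prompt": prompt, "completion": completion}

sample = make_finetune_sample(
    question="If there are 3 cars and each car has 4 wheels, how many wheels are there?",
    teacher_rationale="Each car has 4 wheels, and 3 * 4 = 12.",
    answer="12",
)
```

In practice, many such samples (typically filtered to keep only those where the teacher's final answer is correct) would be written to a training file and used with a standard fine-tuning API or trainer.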