Chain-of-thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with more than 100 billion parameters. In this paper, we explore the transfer of such reasoning capabilities to models with fewer than 100 billion parameters via knowledge distillation. Specifically, we finetune a student model on the chain-of-thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense, and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11% to 21.99% when finetuned on PaLM-540B-generated chains of thought.
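To make the setup concrete, below is a minimal sketch of the student-finetuning step in PyTorch with Hugging Face transformers, assuming the teacher's chain-of-thought rationales have already been collected. The `teacher_data` example, the `t5-small` stand-in checkpoint, and all hyperparameters are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch of chain-of-thought knowledge distillation: finetune a
# small student model to reproduce rationales generated by a large teacher.
# All names and hyperparameters here are illustrative assumptions.
import torch
from torch.utils.data import DataLoader
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Hypothetical teacher-generated data: each target is the teacher's full
# chain of thought followed by its final answer.
teacher_data = [
    {
        "question": "Natalia sold clips to 48 friends in April, and half as many in May. "
                    "How many clips did she sell altogether?",
        "teacher_cot": "In May she sold 48 / 2 = 24 clips. "
                       "In total she sold 48 + 24 = 72 clips. The answer is 72.",
    },
    # ... more (question, teacher chain-of-thought) pairs ...
]

model_name = "t5-small"  # stand-in for the much larger T5 XXL student used in the paper
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

def collate(batch):
    # Encode questions as inputs and teacher chains of thought as targets.
    inputs = tokenizer([ex["question"] for ex in batch],
                       padding=True, truncation=True, return_tensors="pt")
    targets = tokenizer([ex["teacher_cot"] for ex in batch],
                        padding=True, truncation=True, return_tensors="pt")
    labels = targets.input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    return inputs.input_ids, inputs.attention_mask, labels

loader = DataLoader(teacher_data, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(3):
    for input_ids, attention_mask, labels in loader:
        # Standard sequence-to-sequence finetuning: the student learns to
        # generate the teacher's chain of thought for each question.
        loss = model(input_ids=input_ids,
                     attention_mask=attention_mask,
                     labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```

At evaluation time, the finetuned student would generate a chain of thought for each test question and the final answer would be parsed from the end of that generation; the sketch above covers only the distillation (finetuning) step described in the abstract.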