Pretrained large language models (LLMs) are widely used in many sub-fields of natural language processing (NLP) and are generally known as excellent few-shot learners with task-specific exemplars. Notably, chain-of-thought (CoT) prompting, a recent technique for eliciting complex multi-step reasoning through step-by-step answer examples, achieved state-of-the-art performance in arithmetic and symbolic reasoning, difficult system-2 tasks that do not follow the standard scaling laws for LLMs. While these successes are often attributed to LLMs' ability for few-shot learning, we show that LLMs are decent zero-shot reasoners by simply adding "Let's think step by step" before each answer. Experimental results demonstrate that our Zero-shot-CoT, using the same single prompt template, significantly outperforms zero-shot LLM performance on diverse benchmark reasoning tasks including arithmetic (MultiArith, GSM8K, AQUA-RAT, SVAMP), symbolic reasoning (Last Letter, Coin Flip), and other logical reasoning tasks (Date Understanding, Tracking Shuffled Objects), without any hand-crafted few-shot examples, e.g., increasing the accuracy on MultiArith from 17.7% to 78.7% and on GSM8K from 10.4% to 40.7% with the large InstructGPT model (text-davinci-002), as well as achieving improvements of similar magnitude with another off-the-shelf large model, the 540B-parameter PaLM. The versatility of this single prompt across very diverse reasoning tasks hints at untapped and understudied fundamental zero-shot capabilities of LLMs, suggesting that high-level, multi-task, broad cognitive capabilities may be extracted by simple prompting. We hope our work not only serves as the minimal, strongest zero-shot baseline for these challenging reasoning benchmarks, but also highlights the importance of carefully exploring and analyzing the enormous zero-shot knowledge hidden inside LLMs before crafting finetuning datasets or few-shot exemplars.
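The Zero-shot-CoT method described above can be sketched as a simple two-stage prompting pipeline: first append the trigger phrase "Let's think step by step" to elicit a chain of thought, then feed the generated reasoning back to extract the final answer. The sketch below is a minimal illustration, not the authors' exact implementation; `call_llm` is a hypothetical stand-in for any text-completion API, and the answer-extraction phrasing is one of several task-dependent variants.

```python
# Minimal sketch of two-stage Zero-shot-CoT prompting.
# `call_llm` is a hypothetical function: prompt string -> completion string.

def zero_shot_cot(question: str, call_llm):
    # Stage 1: reasoning extraction. The fixed trigger phrase
    # "Let's think step by step" elicits step-by-step reasoning
    # without any hand-crafted few-shot exemplars.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = call_llm(reasoning_prompt)

    # Stage 2: answer extraction. Append the generated reasoning and
    # a task-specific cue so the final answer is easy to parse.
    answer_prompt = (
        f"{reasoning_prompt} {reasoning}\n"
        "Therefore, the answer (arabic numerals) is"
    )
    return call_llm(answer_prompt)
```

In practice the same single template is reused across tasks; only the answer-extraction cue (e.g., "arabic numerals" vs. "Yes or No") is adapted to the expected answer format.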