We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thoughts (CoT), sequences of short sentences describing intermediate reasoning steps towards a final answer, large language models can generate new reasoning chains and predict answers for new inputs. A central question is which reasoning examples make the most effective prompts. In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning. We show that prompts with higher reasoning complexity, i.e., chains with more reasoning steps, achieve substantially better performance on math word reasoning tasks over strong baselines. We further extend our complexity-based criteria from prompting (selecting inputs) to decoding (selecting outputs), where we sample multiple reasoning chains from the model, then choose the majority of generated answers from complex reasoning chains (over simple chains). When used to prompt GPT-3, our approach substantially improves multi-step reasoning accuracy, with an 8.6% absolute improvement on GSM8K, and 6.4% on MathQA. Compared with existing example selection schemes like manual tuning or retrieval-based selection, selection based on reasoning complexity is intuitive, easy to implement, and annotation-efficient. Further results demonstrate the robustness of our methods under format perturbation and distribution shift.
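As a rough illustration of the two ideas described above, the sketch below shows complexity-based example selection and complexity-based voting over sampled outputs. It is a minimal sketch, not the paper's implementation: the field names (`chain`, `answer`), the step-counting heuristic (non-empty lines in the reasoning chain), and the cutoff values are illustrative assumptions.

```python
from collections import Counter

def count_steps(chain: str) -> int:
    """Approximate reasoning complexity as the number of non-empty lines in the chain."""
    return sum(1 for line in chain.splitlines() if line.strip())

def select_complex_prompts(annotated_examples, k=8):
    """Complexity-based prompting: keep the k annotated examples whose chains have the most steps."""
    return sorted(annotated_examples,
                  key=lambda ex: count_steps(ex["chain"]),
                  reverse=True)[:k]

def complexity_based_vote(sampled_outputs, top_k=5):
    """Complexity-based decoding: among sampled chains, keep the top_k most complex
    and return the majority answer across them."""
    ranked = sorted(sampled_outputs,
                    key=lambda o: count_steps(o["chain"]),
                    reverse=True)[:top_k]
    answers = [o["answer"] for o in ranked]
    return Counter(answers).most_common(1)[0][0]
```

Under these assumptions, the same complexity criterion is applied twice: once to choose which annotated examples go into the prompt, and once to decide which sampled reasoning chains are allowed to vote on the final answer.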