Large language models (LLMs) have shown remarkable reasoning capabilities when given chain-of-thought prompts (examples with intermediate reasoning steps). Existing benchmarks measure reasoning ability indirectly, by evaluating accuracy on downstream tasks such as mathematical reasoning. However, it is unclear how these models obtain their answers and whether they rely on simple heuristics rather than on the generated chain-of-thought. To enable systematic exploration of the reasoning ability of LLMs, we present a new synthetic question-answering dataset called PrOntoQA, where each example is generated from a synthetic world model represented in first-order logic. This allows us to parse the generated chain-of-thought into symbolic proofs for formal analysis. Our analysis of InstructGPT and GPT-3 shows that LLMs are quite capable of making correct individual deduction steps, and so are generally capable of reasoning, even in fictional contexts. However, they have difficulty with proof planning: when multiple valid deduction steps are available, they are not able to systematically explore the different options.
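To make the setup concrete, the sketch below is a minimal, hypothetical illustration (not the paper's actual generator) of how a PrOntoQA-style example could be produced from a toy world model of universally quantified subtype rules. The names `Rule` and `generate_example` are assumptions introduced here for illustration; the point is that the chain-of-thought is kept in a symbolic form whose individual deduction steps can be checked mechanically.

```python
# Minimal sketch (assumed, not the actual PrOntoQA generator): build a QA example
# from a synthetic world model of rules of the form  forall x: A(x) -> B(x)
# plus one ground fact, with the chain-of-thought as a list of modus ponens steps.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    premise: str      # concept A in  forall x: A(x) -> B(x)
    conclusion: str   # concept B

def generate_example(entity: str, concepts: list[str]) -> dict:
    """Build a linear ontology over `concepts`, a ground fact about `entity`, and a
    gold chain-of-thought proving the final concept by repeated modus ponens."""
    rules = [Rule(a, b) for a, b in zip(concepts, concepts[1:])]

    # Symbolic proof: each step applies one rule to the current conclusion.
    proof = []
    current = concepts[0]
    for rule in rules:
        assert rule.premise == current            # each step is a valid deduction
        proof.append((entity, rule.premise, rule.conclusion))
        current = rule.conclusion

    # Render the world model and the proof as natural language for the prompt.
    context = [f"Every {r.premise} is a {r.conclusion}." for r in rules]
    context.append(f"{entity} is a {concepts[0]}.")
    chain_of_thought = [f"{e} is a {a}, and every {a} is a {b}, so {e} is a {b}."
                        for e, a, b in proof]
    return {
        "context": " ".join(context),
        "question": f"Is {entity} a {concepts[-1]}?",
        "chain_of_thought": chain_of_thought,
        "proof": proof,           # symbolic form, checkable step by step
        "answer": "True",
    }

if __name__ == "__main__":
    example = generate_example("Polly", ["wumpus", "yumpus", "zumpus"])
    print(example["context"])
    print(example["question"])
    print("\n".join(example["chain_of_thought"]))
```

Because each generated chain-of-thought sentence corresponds one-to-one to a symbolic proof step, a model's output can be parsed back into this form and each deduction checked against the world model, which is what enables the kind of formal analysis described above.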