Large language models (LLMs) have a substantial capacity for high-level analogical reasoning: reproducing patterns in linear text that occur in their training data (zero-shot evaluation) or in the provided context (few-shot in-context learning). However, recent studies show that even the largest LLMs fail in scenarios that require reasoning over multiple objects or facts, or making sequences of logical deductions. We propose a two-stage probabilistic inference paradigm, ThinkSum, that reasons over sets of objects or facts in a structured manner. In the first stage (Think -- 'fast' retrieval of associations), an LLM is queried in parallel over a set of phrases extracted from the prompt or from an auxiliary model call. In the second stage (Sum -- 'slow' probabilistic inference or reasoning), the results of these queries are aggregated to make the final prediction. We demonstrate the advantages of ThinkSum on the BIG-bench suite of evaluation tasks, achieving improvements over the state of the art with GPT-family models on ten difficult tasks, often using far smaller model variants. We compare and contrast ThinkSum with other proposed modifications to direct prompting of LLMs, such as variants of chain-of-thought prompting. We argue that because the probabilistic inference in ThinkSum is performed outside of calls to the LLM, ThinkSum is less sensitive to prompt design, yields more interpretable predictions, and can be flexibly combined with latent variable models to extract structured knowledge from LLMs.
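A minimal sketch of the two-stage paradigm in Python, assuming a generic logprob(context, continuation) scoring function standing in for a concrete LLM API; the names think_sum and facts, and the uniform-mixture aggregation, are illustrative assumptions rather than the only instantiation of the method:

    import math
    from typing import Callable, Sequence

    def think_sum(
        facts: Sequence[str],                  # Think inputs: phrases extracted from the prompt or an auxiliary model call
        candidates: Sequence[str],             # possible answers to score
        logprob: Callable[[str, str], float],  # logprob(context, continuation) from any LLM
    ) -> str:
        """Pick the candidate whose likelihood, marginalized over the fact set, is highest."""
        scores = []
        for cand in candidates:
            # Think ('fast'): one independent LLM query per fact; these calls can run in parallel.
            per_fact = [logprob(fact, cand) for fact in facts]
            # Sum ('slow'): probabilistic aggregation outside the LLM -- here a uniform
            # mixture over facts, log((1/N) * sum_i p(cand | fact_i)), computed via log-sum-exp.
            m = max(per_fact)
            scores.append(m + math.log(sum(math.exp(lp - m) for lp in per_fact) / len(per_fact)))
        return candidates[max(range(len(candidates)), key=scores.__getitem__)]

Because the Sum stage runs outside the LLM, the mixture above could be swapped for other aggregations (products of per-fact probabilities, filtering, or richer latent variable models) without changing the Think-stage prompts.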