Reasoning is a fundamental aspect of human intelligence that plays a crucial role in activities such as problem solving, decision making, and critical thinking. In recent years, large language models (LLMs) have made significant progress in natural language processing, and it has been observed that these models may exhibit reasoning abilities when they are sufficiently large. However, the extent to which LLMs are capable of reasoning remains unclear. This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs, including techniques for improving and eliciting reasoning in these models, methods and benchmarks for evaluating their reasoning abilities, findings and implications of previous research in this field, and suggestions for future directions. Our aim is to provide a detailed and up-to-date review of this topic and to stimulate meaningful discussion and future work.