Large Language Models (LLMs) have achieved strong performance across a wide range of natural language processing tasks in recent years, including machine translation, text generation, and question answering. As their applications extend to increasingly complex scenarios, however, LLMs continue to struggle with tasks that require deep reasoning and logical inference. In particular, models trained on large-scale textual corpora may incorporate noisy or irrelevant information during generation, leading to incorrect predictions or outputs that are inconsistent with factual knowledge. To address this limitation, we propose SGR, a stepwise reasoning enhancement framework for LLMs based on external subgraph generation. The framework dynamically constructs query-relevant subgraphs from external knowledge bases and leverages their semantic structure to guide the reasoning process. By reasoning step by step over structured subgraphs, SGR reduces the influence of noisy information and improves reasoning accuracy. Specifically, the framework first generates an external subgraph tailored to the input query, then guides the model to conduct multi-step reasoning grounded in the subgraph, and finally integrates multiple reasoning paths to produce the final answer. Experimental results on multiple benchmark datasets show that SGR consistently outperforms strong baselines, demonstrating its effectiveness in enhancing the reasoning capabilities of LLMs.
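The three-stage pipeline described above (subgraph construction, stepwise reasoning over the subgraph, and aggregation of multiple reasoning paths) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the toy triple store, the hop-limited expansion, the path enumeration, and the majority-vote aggregation (all function and variable names here) are assumptions introduced for illustration.

```python
# Hypothetical sketch of the SGR pipeline from the abstract:
# (1) build a query-relevant subgraph, (2) enumerate stepwise
# reasoning paths over it, (3) aggregate paths into one answer.
from collections import Counter

# Toy knowledge base as (head, relation, tail) triples (illustrative only).
KB = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
    ("Germany", "located_in", "Europe"),
]

def build_subgraph(query_entities, kb, hops=2):
    """Step 1: collect triples reachable from the query entities
    within a fixed number of hops, discarding unrelated triples."""
    frontier, subgraph = set(query_entities), []
    for _ in range(hops):
        nxt = set()
        for h, r, t in kb:
            if h in frontier and (h, r, t) not in subgraph:
                subgraph.append((h, r, t))
                nxt.add(t)
        frontier = nxt
    return subgraph

def reasoning_paths(start, subgraph):
    """Step 2: enumerate maximal stepwise paths through the subgraph
    (assumes the extracted subgraph is acyclic, as in this toy example)."""
    paths = []
    def walk(node, path):
        extended = False
        for h, r, t in subgraph:
            if h == node:
                walk(t, path + [(h, r, t)])
                extended = True
        if not extended and path:
            paths.append(path)
    walk(start, [])
    return paths

def answer(query_entity, kb):
    """Step 3: integrate paths by majority vote over their endpoints."""
    paths = reasoning_paths(query_entity, build_subgraph([query_entity], kb))
    endpoints = Counter(path[-1][2] for path in paths)
    return endpoints.most_common(1)[0][0]

print(answer("Paris", KB))  # the single path Paris -> France -> Europe yields "Europe"
```

In the paper's setting the subgraph retrieval and the stepwise reasoning would be driven by the LLM and an external knowledge base rather than a hand-coded traversal; the sketch only mirrors the control flow of the three stages.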