Chain-of-Thought (CoT) prompting enhances the reasoning of large language models (LLMs) by decomposing problems into sequential steps, mimicking human logic and reducing errors. However, complex tasks with vast solution spaces and vague constraints often exceed the capacity of a single reasoning chain. Inspired by Minimal Free Resolution (MFR) in commutative algebra and algebraic geometry, we propose Syzygy of Thoughts (SoT), a novel framework that extends CoT by introducing auxiliary, interrelated reasoning paths. SoT captures deeper logical dependencies, enabling more robust and structured problem-solving. MFR decomposes a module into a sequence of free modules of minimal rank, providing a structured analytical approach to complex systems. Our method adopts the MFR concepts of "Module", "Betti numbers", "Freeness", "Mapping", "Exactness", and "Minimality", enabling the systematic decomposition of the original complex problem into logically complete minimal subproblems while preserving key problem features and reducing reasoning length. We evaluated SoT across diverse datasets (e.g., GSM8K, MATH) and models (e.g., GPT-4o-mini, Qwen2.5), achieving inference accuracy that matches or surpasses mainstream CoT baselines. Additionally, by aligning the sampling process with algebraic constraints, our approach improves the inference-time scalability of LLMs, ensuring both transparent reasoning and strong performance. Our code will be publicly available at https://github.com/dlMARiA/Syzygy-of-thoughts.
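For readers unfamiliar with the algebraic background invoked above, a minimal free resolution of a finitely generated module $M$ over a suitable (e.g., Noetherian local or graded) ring $R$ can be sketched as the exact sequence below; the ranks $\beta_i$ of the free modules are the Betti numbers. This is standard commutative algebra, included only as context for the analogy, not as part of the proposed method:

```latex
% A minimal free resolution of a finitely generated R-module M:
% an exact sequence of free modules whose ranks (the Betti numbers
% \beta_i) are as small as possible.
\[
  \cdots \longrightarrow R^{\beta_2} \xrightarrow{\; d_2 \;}
  R^{\beta_1} \xrightarrow{\; d_1 \;} R^{\beta_0}
  \longrightarrow M \longrightarrow 0
\]
% Minimality: each differential d_i maps into \mathfrak{m} R^{\beta_{i-1}},
% where \mathfrak{m} is the maximal (or irrelevant graded) ideal, so the
% matrices of the d_i contain no unit entries and the \beta_i are
% uniquely determined by M.
```

In the SoT analogy, the original problem plays the role of $M$, the minimal subproblems correspond to the free modules $R^{\beta_i}$, and exactness mirrors the requirement that no logical content is lost between decomposition steps.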