Despite the remarkable progress of neural models, their ability to generalize, a cornerstone of applications such as logical reasoning, remains a critical challenge. We delineate two fundamental aspects of this ability: compositionality, the capacity to abstract the atomic logical rules underlying complex inferences, and recursiveness, the aptitude to build intricate representations through iterative application of inference rules. In the literature, these two aspects are often conflated under the umbrella term of generalization. To sharpen this distinction, we investigate the logical generalization capabilities of pre-trained large language models (LLMs) using the syllogistic fragment as a benchmark for natural language reasoning. Though simple, this fragment provides a foundational yet expressive subset of formal logic that supports controlled evaluation of essential reasoning abilities. Our findings reveal a significant disparity: while LLMs demonstrate reasonable proficiency in recursiveness, they struggle with compositionality. To overcome these limitations and obtain a reliable logical prover, we propose a hybrid architecture that integrates symbolic reasoning with neural computation. This synergy enables robust and efficient inference: neural components accelerate processing, while symbolic reasoning ensures completeness. Our experiments show that high efficiency is preserved even with relatively small neural components. This analysis provides a rationale for the proposed methodology and highlights the potential of hybrid models to address key generalization barriers in neural reasoning systems.
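To make the division of labor concrete, the following is a minimal, hypothetical sketch (not the paper's implementation) of a hybrid loop over the syllogistic fragment: a stand-in neural scorer (`neural_score`) greedily guides a budget-limited derivation as the fast path, while an exhaustive symbolic fixpoint over a small rule set serves as the complete fallback, so soundness and completeness never depend on the learned component.

```python
# Hypothetical sketch of a hybrid neuro-symbolic prover for the syllogistic
# fragment. All names (neural_score, guided_search, hybrid_prove) are
# illustrative placeholders, not components from the paper.
from itertools import product
from typing import Callable

# A syllogistic statement: (quantifier, subject, predicate),
# e.g. ("All", "greeks", "humans").
Stmt = tuple[str, str, str]

def one_step(derived: set[Stmt]) -> set[Stmt]:
    """All statements derivable from `derived` by a single rule application."""
    out: set[Stmt] = set()
    for (q1, a, b), (q2, c, d) in product(derived, repeat=2):
        if b != c:
            continue
        if q1 == "All" and q2 == "All":     # Barbara: All A B, All B D |- All A D
            out.add(("All", a, d))
        elif q1 == "All" and q2 == "No":    # Celarent: All A B, No B D |- No A D
            out.add(("No", a, d))
        elif q1 == "Some" and q2 == "All":  # Darii: Some A B, All B D |- Some A D
            out.add(("Some", a, d))
    return out - derived

def symbolic_closure(premises: set[Stmt]) -> set[Stmt]:
    """Complete for this toy rule set: iterate rule application to a fixpoint."""
    derived = set(premises)
    while True:
        new = one_step(derived)
        if not new:
            return derived
        derived |= new

def neural_score(stmt: Stmt, goal: Stmt) -> float:
    """Stand-in for the neural component: cheaply score how promising a
    candidate inference looks with respect to the goal (toy lexical overlap)."""
    return sum(x == y for x, y in zip(stmt, goal))

def guided_search(premises: set[Stmt], goal: Stmt,
                  score: Callable[[Stmt, Stmt], float], budget: int = 20) -> bool:
    """Fast path: greedily add the highest-scoring consequence each step.
    Sound (only valid rule instances are added) but deliberately incomplete."""
    derived = set(premises)
    for _ in range(budget):
        if goal in derived:
            return True
        new = one_step(derived)
        if not new:
            break
        derived.add(max(new, key=lambda s: score(s, goal)))
    return goal in derived

def hybrid_prove(premises: set[Stmt], goal: Stmt) -> bool:
    """Try the neurally guided fast path first; the symbolic fixpoint
    guarantees completeness when the guidance fails."""
    if guided_search(premises, goal, neural_score):
        return True
    return goal in symbolic_closure(premises)

if __name__ == "__main__":
    kb = {("All", "greeks", "humans"), ("All", "humans", "mortals")}
    print(hybrid_prove(kb, ("All", "greeks", "mortals")))  # True, via Barbara
```

In this sketch the learned scorer only reorders valid rule applications, so a wrong score can at worst slow the search down before the symbolic fallback takes over; this mirrors the stated design goal that neural components accelerate processing while symbolic reasoning ensures completeness.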