Large language models (LLMs) such as ChatGPT have recently demonstrated significant potential in mathematical abilities, offering a valuable reasoning paradigm consistent with human natural language. However, LLMs currently struggle to bridge perception, language understanding, and reasoning, because the underlying information flows of these abilities are incompatible, making it challenging for them to accomplish tasks autonomously. On the other hand, abductive learning (ABL) frameworks, which integrate the two abilities of perception and reasoning, have seen significant success in the inverse decipherment of incomplete facts, but they are limited by a lack of semantic understanding of logical reasoning rules and a dependence on complicated domain knowledge representation. This paper presents ChatABL, a novel method that integrates LLMs into the ABL framework, aiming to unify the three abilities in a more user-friendly and comprehensible manner. The proposed method leverages the LLM's strengths in understanding and logical reasoning to correct incomplete logical facts and thereby optimize the performance of the perceptual module, by summarizing and reorganizing reasoning rules expressed in natural language. In turn, the perceptual module provides the LLM with the necessary reasoning examples, also in natural language. The variable-length handwritten equation deciphering task, an abstract formulation of Mayan calendar decoding, serves as a testbed; comparative studies show that ChatABL achieves reasoning ability beyond most existing state-of-the-art methods. To the best of our knowledge, ChatABL is the first attempt to explore a new paradigm for approaching human-level cognitive ability via natural language interaction with ChatGPT.
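The interaction described above can be illustrated with a minimal schematic sketch: a perceptual module emits (possibly wrong) symbol predictions, a reasoner checks them against rules stated in natural language and abduces a corrected fact, and the corrected fact would then serve as a pseudo-label for retraining perception. All names and the toy rule below are illustrative assumptions, not the paper's implementation; in the actual method, `perceive` is a neural recognizer and `llm_abduce` is a call to ChatGPT.

```python
# A toy rule, expressed in natural language as in ChatABL (illustrative).
RULES = "Each equation must satisfy: left operand + right operand = result."

# Simulated perception error: one input is misrecognized (illustrative).
MISREAD = {"1+1=2": "1+1=3"}

def perceive(image: str) -> str:
    # Stand-in for the neural perceptual module: maps raw input to symbols,
    # sometimes incorrectly.
    return MISREAD.get(image, image)

def llm_abduce(symbols: str, rules: str) -> str:
    # Stand-in for the LLM reasoner: given predicted symbols and rules in
    # natural language, return a minimally revised, rule-consistent fact.
    left, rest = symbols.split("+")
    right, result = rest.split("=")
    if int(left) + int(right) != int(result):
        result = str(int(left) + int(right))  # abduced correction
    return f"{left}+{right}={result}"

def chatabl_step(image: str) -> str:
    guess = perceive(image)            # perception: symbols from raw input
    corrected = llm_abduce(guess, RULES)  # reasoning: fix inconsistent facts
    # In the full method, `corrected` would be fed back as a pseudo-label
    # to retrain and improve the perceptual module.
    return corrected

print(chatabl_step("1+1=2"))  # misread as "1+1=3", abduced back to "1+1=2"
```

The key design point this sketch mirrors is that both directions of the loop communicate in natural-language-style representations rather than a fixed logical formalism, which is what removes the need for complicated domain knowledge engineering.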