Large Language Models (LLMs) excel at understanding natural language but struggle with explicit commonsense reasoning. A recent line of research suggests that combining LLMs with robust symbolic reasoning systems can overcome this problem on story-based question answering tasks. In this setting, existing approaches typically depend on human expertise to manually craft the symbolic component. We argue, however, that this component can also be learned automatically from examples. In this work, we introduce LLM2LAS, a hybrid system that effectively combines the natural language understanding capabilities of LLMs, the rule induction power of the Learning from Answer Sets (LAS) system ILASP, and the formal reasoning strengths of Answer Set Programming (ASP). LLMs are used to extract semantic structures from text, which ILASP then transforms into interpretable logic rules. These rules allow an ASP solver to perform precise and consistent reasoning, enabling correct answers to previously unseen questions. Empirical results highlight the strengths and weaknesses of our automatic approach to learning and reasoning on a story-based question answering benchmark.