Language planning aims to achieve complex high-level goals by decomposing them into sequences of simpler low-level steps. Such procedural reasoning ability is essential for applications such as household robots and virtual assistants. Although language planning is a basic skill for humans in daily life, it remains a challenge for large language models (LLMs), which lack deep commonsense knowledge of the real world. Previous methods require either manual exemplars or annotated programs to elicit such ability from LLMs. In contrast, this paper proposes the Neuro-Symbolic Causal Language Planner (CLAP), which elicits procedural knowledge from LLMs with commonsense-infused prompting. The pre-trained knowledge in LLMs is essentially an unobserved confounder that causes spurious correlations between tasks and action plans. Through the lens of a Structural Causal Model (SCM), we propose an effective strategy in CLAP to construct prompts as a causal intervention on our SCM. Using graph sampling techniques and symbolic program executors, our strategy formalizes structured causal prompts from commonsense knowledge bases. CLAP obtains state-of-the-art performance on WikiHow and RobotHow, achieving a relative improvement of 5.28% in human evaluations under the counterfactual setting. This indicates the superiority of CLAP in causal language planning, both semantically and sequentially.