Large Language Models (LLMs), despite their advanced linguistic capabilities, fundamentally lack an intuitive understanding of physical dynamics, which limits their effectiveness in real-world scenarios that require causal reasoning. In this paper, we introduce Causal World Model Induction (CWMI), a novel framework designed to embed an explicit model of causal physics within an LLM. Our approach incorporates a dedicated Causal Physics Module (CPM) and a new training objective, the Causal Intervention Loss, which together encourage the model to learn cause-and-effect relationships from multimodal data. By training the model to predict the outcomes of hypothetical interventions rather than merely capturing statistical correlations, CWMI develops a robust internal representation of physical laws. Experimental results show that CWMI significantly outperforms state-of-the-art LLMs on zero-shot physical reasoning tasks, including the PIQA benchmark and our newly proposed PhysiCa-Bench dataset. These findings demonstrate that inducing a causal world model is a critical step toward more reliable and generalizable AI systems.