We present an approach to systematic reasoning that produces human-interpretable proof trees grounded in a factbase. Our solution resembles classic Prolog-based inference engines, but replaces handcrafted rules with a combination of neural language modeling, guided generation, and semiparametric dense retrieval. This novel reasoning engine, NELLIE, dynamically instantiates interpretable inference rules that capture and score entailment (de)compositions over natural language statements. NELLIE performs competitively on scientific QA datasets requiring structured explanations over multiple facts, while fully grounding its justification proofs in verified knowledge.
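As a rough illustration of the kind of backward-chaining proof search described above (a minimal sketch, not NELLIE's actual implementation), the following Python fragment grounds a hypothesis in a factbase or recursively proves the premises of a generated decomposition. The `decompose` function, the `FACTBASE` contents, and the exact-match lookup are hypothetical stand-ins for the neural rule generator and dense retriever:

```python
from dataclasses import dataclass, field

# Hypothetical verified factbase; the real system would use dense
# retrieval rather than exact string membership.
FACTBASE = {
    "an acorn is a seed",
    "a seed grows into a plant",
}

def decompose(hypothesis):
    """Stand-in for the neural rule generator: proposes decompositions of a
    hypothesis into premises whose conjunction would entail it."""
    if hypothesis == "an acorn grows into a plant":
        yield ["an acorn is a seed", "a seed grows into a plant"]

@dataclass
class Proof:
    """A node in a human-interpretable proof tree."""
    statement: str
    children: list = field(default_factory=list)

def prove(hypothesis, depth=2):
    """Backward-chain: ground the hypothesis in the factbase, or recursively
    prove every premise of some generated decomposition."""
    if hypothesis in FACTBASE:              # retrieval step (exact match here)
        return Proof(hypothesis)            # leaf: grounded in verified knowledge
    if depth == 0:
        return None                         # depth bound reached
    for premises in decompose(hypothesis):  # dynamically instantiated rule
        subproofs = [prove(p, depth - 1) for p in premises]
        if all(subproofs):                  # every premise must be proven
            return Proof(hypothesis, subproofs)
    return None

proof = prove("an acorn grows into a plant")
```

In this toy run, the hypothesis is proven by decomposing it into two premises that are both found in the factbase, yielding a two-child proof tree; the real engine additionally scores each candidate decomposition with an entailment model to rank competing proofs.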