Symbolic reasoning, rule-based symbol manipulation, is a hallmark of human intelligence. However, rule-based systems have had limited success competing with learning-based systems outside formalized domains such as automated theorem proving. We hypothesize that this is due to the manual construction of rules in past attempts. In this work, we ask how we can build a rule-based system that can reason with natural language input but without the manual construction of rules. We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences, and MetaInduce, a learning algorithm that induces MetaQNL rules from training data consisting of questions and answers, with or without intermediate reasoning steps. Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks; it learns compact models with much less data and produces not only answers but also checkable proofs. Further, experiments on a real-world morphological analysis benchmark show that our method can handle noise and ambiguity. Code will be released at https://github.com/princeton-vl/MetaQNL.
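To make the notion of a quasi-natural-language rule concrete, here is a toy sketch, not the paper's actual implementation: sentences are represented as token sequences that may contain variables (written `$X`), and a rule fires when its premise patterns unify with known facts. For simplicity this sketch binds each variable to a single token, whereas a full system would allow variables to bind multi-word spans; the example rule and facts are invented for illustration.

```python
# Toy illustration of rule application over quasi-natural-language sentences.
# Sentences are tuples of tokens; tokens beginning with "$" are variables.
# Simplifying assumption: each variable matches exactly one token.

def match(pattern, sentence, subst=None):
    """Unify a token pattern against a concrete sentence, extending the
    substitution; return the extended substitution, or None on failure."""
    subst = dict(subst or {})
    if len(pattern) != len(sentence):
        return None
    for p, s in zip(pattern, sentence):
        if p.startswith("$"):
            if subst.get(p, s) != s:  # variable already bound to something else
                return None
            subst[p] = s
        elif p != s:  # constant tokens must match exactly
            return None
    return subst

def apply_rule(premises, conclusion, facts):
    """Forward-chain one rule over a set of facts: find every way to match
    all premises, and instantiate the conclusion with each substitution."""
    results = []

    def search(remaining, subst):
        if not remaining:
            results.append(tuple(subst.get(t, t) for t in conclusion))
            return
        for fact in facts:
            s = match(remaining[0], fact, subst)
            if s is not None:
                search(remaining[1:], s)

    search(premises, {})
    return results

# Hypothetical rule: "$X is a bird" entails "$X can fly".
rule_premises = [("$X", "is", "a", "bird")]
rule_conclusion = ("$X", "can", "fly")
facts = {("tweety", "is", "a", "bird")}

print(apply_rule(rule_premises, rule_conclusion, facts))
# → [('tweety', 'can', 'fly')]
```

Because each derived sentence records the substitution that produced it, a chain of such rule applications yields a proof that can be checked step by step, which is the sense in which the abstract's "checkable proofs" are possible.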