For many reasoning-heavy tasks involving raw inputs, it is challenging to design an appropriate end-to-end learning pipeline. Neuro-Symbolic Learning methods divide the process into sub-symbolic perception and symbolic reasoning, aiming to exploit data-driven machine learning and knowledge-driven reasoning simultaneously. However, they suffer from exponential computational complexity at the interface between these two components, where the sub-symbolic learning model lacks direct supervision and the symbolic model lacks accurate input facts. Hence, most of these methods assume the existence of a strong symbolic knowledge base and learn only the perception model, sidestepping a crucial problem: where does the knowledge come from? In this paper, we present Abductive Meta-Interpretive Learning ($Meta_{Abd}$), which unites abduction and induction to jointly learn neural networks and induce logic theories from raw data. Experimental results demonstrate that $Meta_{Abd}$ not only outperforms the compared systems in predictive accuracy and data efficiency but also induces logic programs that can be re-used as background knowledge in subsequent learning tasks. To the best of our knowledge, $Meta_{Abd}$ is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention.