Learning first-order logic programs (LPs) from relational facts, which yields intuitive insights into the data, is a challenging topic in neuro-symbolic research. We introduce a novel differentiable inductive logic programming (ILP) model, called the differentiable first-order rule learner (DFOL), which finds correct LPs from relational facts by searching for interpretable matrix representations of LPs. These interpretable matrices are treated as trainable tensors in neural networks (NNs), and the NNs are devised according to the differentiable semantics of LPs. Specifically, we first adopt a novel propositionalization method that transforms facts into NN-readable vector pairs representing interpretation pairs. We then replace the immediate consequence operator with NN constraint functions consisting of algebraic operations and a sigmoid-like activation function; that is, the symbolic forward-chained format of LPs is mapped into operations between subsymbolic vector representations of atoms. After training with gradient descent, the well-trained parameters of the NNs can be decoded into precise symbolic LPs in the forward-chained logic format. We demonstrate that DFOL performs well on several standard ILP datasets, knowledge bases, and probabilistic relational facts, and outperforms several well-known differentiable ILP models. Experimental results indicate that DFOL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
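To make the core idea concrete, the following is a minimal sketch of one soft application of the immediate consequence operator over a valuation vector. It is illustrative only, not the paper's implementation: it assumes the product for body conjunction, max for combining with the previous valuation, and a steep sigmoid as the squashing activation; all names (`squash`, `v`, the example rule) are hypothetical.

```python
import numpy as np

def squash(x, gamma=8.0, tau=0.5):
    """Sigmoid-like activation pushing soft truth values toward {0, 1}."""
    return 1.0 / (1.0 + np.exp(-gamma * (x - tau)))

# Ground atoms, indexed: 0: parent(a,b), 1: parent(b,c), 2: grandparent(a,c).
# A valuation vector assigns each ground atom a soft truth value,
# so probabilistic relational facts fit naturally.
v = np.array([0.9, 1.0, 0.0])

# One soft application of the immediate consequence operator for the rule
#   grandparent(X,Y) :- parent(X,Z), parent(Z,Y)
# Conjunction of body atoms as a product; combine with the old head value via max.
body = v[0] * v[1]
v_new = v.copy()
v_new[2] = max(v[2], squash(body))

print(np.round(v_new, 3))  # grandparent(a,c) is now close to 1.0
```

Because every step is composed of differentiable algebraic operations, gradients can flow from a loss on the derived interpretation back to trainable matrices encoding which atoms appear in a rule body, which is the sense in which the learned parameters can later be decoded into symbolic LPs.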