We propose an efficient, interpretable neuro-symbolic model for solving Inductive Logic Programming (ILP) problems. In this model, which is built from a set of meta-rules organised in a hierarchical structure, first-order rules are invented by learning embeddings that match facts to the body predicates of a meta-rule. To instantiate the model, we specifically design an expressive set of generic meta-rules and show that they generate a substantial fragment of Horn clauses. During training, we inject controlled \pw{Gumbel} noise to avoid local optima and employ an interpretability regularization term to further guide convergence toward interpretable rules. We empirically validate our model on various tasks (ILP, Visual Genome, reinforcement learning) against several state-of-the-art methods.
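As a minimal sketch of the training ingredients mentioned above, the snippet below soft-matches learned predicate embeddings to the body slots of a single meta-rule, perturbs the matching scores with controlled Gumbel noise, and adds an entropy-style interpretability regularizer. This is not the authors' implementation: the tensor shapes, noise scale, temperature, and entropy penalty are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch only; all names and hyper-parameters are assumptions.
import torch
import torch.nn.functional as F

num_predicates, num_slots, dim = 8, 2, 16
pred_emb = torch.nn.Parameter(torch.randn(num_predicates, dim))  # learned predicate embeddings
slot_emb = torch.nn.Parameter(torch.randn(num_slots, dim))       # body slots of one meta-rule

def matching_distribution(tau=1.0, noise_scale=0.5):
    """Soft assignment of predicates to each body slot of the meta-rule."""
    scores = slot_emb @ pred_emb.t()                  # (num_slots, num_predicates)
    # Controlled Gumbel noise on the scores to help escape local optima.
    u = torch.rand_like(scores).clamp_min(1e-9)
    gumbel = -torch.log(-torch.log(u))
    return F.softmax((scores + noise_scale * gumbel) / tau, dim=-1)

def interpretability_loss(probs):
    """Entropy penalty pushing each slot toward a single, interpretable predicate."""
    return -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1).mean()

probs = matching_distribution()
loss = interpretability_loss(probs)   # in practice, added to the task loss with a small weight
loss.backward()
\end{verbatim}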