Learning logical rules is critical to improving reasoning in KGs, because such rules provide logical, interpretable explanations when used for prediction and generalize well to other tasks, domains, and data. While many methods have recently been proposed to learn logical rules, most of them are either restricted by their computational complexity and cannot handle the large search space of large-scale KGs, or generalize poorly to data outside the training set. In this paper, we propose NCRL, an end-to-end neural model for learning compositional logical rules. NCRL detects the best compositional structure of a rule body and breaks it into small compositions in order to infer the rule head. By recurrently merging compositions in the rule body with a recurrent attention unit, NCRL finally predicts a single rule head. Experimental results show that NCRL learns high-quality rules and generalizes well. Specifically, we show that NCRL is scalable, efficient, and yields state-of-the-art results for knowledge graph completion on large-scale KGs. Moreover, we test NCRL for systematic generalization by learning to reason on small-scale observed graphs and evaluating on larger unseen ones.
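The recurrent merging described above can be sketched as follows. This is a minimal illustrative toy, not NCRL's actual implementation: the composition table and relation names are hypothetical, and where NCRL uses a recurrent attention unit to score every window of the rule body and pick the best composition, the sketch simply takes the first composable adjacent pair.

```python
# Toy composition table mapping a pair of adjacent relations to the single
# relation they compose into (in NCRL this mapping is learned via attention).
COMPOSE = {
    ("brother_of", "father_of"): "uncle_of",
    ("uncle_of", "spouse_of"): "uncle_in_law_of",
}

def merge_rule_body(body):
    """Recurrently merge adjacent relations until one head relation remains."""
    while len(body) > 1:
        # Greedily take the first composable adjacent pair; NCRL instead
        # attends over all windows and merges the best-scoring composition.
        for i in range(len(body) - 1):
            pair = (body[i], body[i + 1])
            if pair in COMPOSE:
                body = body[:i] + [COMPOSE[pair]] + body[i + 2:]
                break
        else:
            return None  # no composable pair: the body cannot be reduced
    return body[0]

# A body of three relations reduces in two merges to a single rule head.
print(merge_rule_body(["brother_of", "father_of", "spouse_of"]))
```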