Large-scale knowledge graphs (KGs) provide structured representations of human knowledge. However, since no KG can contain all knowledge, KGs are usually incomplete. Reasoning over existing facts paves a way to discover missing ones. In this paper, we study the problem of learning logic rules for reasoning on knowledge graphs in order to complete missing factual triplets. Learning logic rules equips a model with strong interpretability as well as the ability to generalize to similar tasks. We propose a model called MPLR that improves on existing models by fully exploiting the training data and taking multi-target scenarios into account. In addition, to address the deficiencies in evaluating model performance and the quality of mined rules, we further propose two novel indicators. Experimental results demonstrate that our MPLR model outperforms state-of-the-art methods on five benchmark datasets. The results also confirm the effectiveness of the two indicators.
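To make the setting concrete, the sketch below is a generic, hypothetical illustration of rule-based KG completion (not the MPLR algorithm): it applies an assumed two-hop chain rule, born_in(x, z) ∧ city_of(z, y) ⇒ nationality(x, y), to a toy set of triples to infer missing facts; all relation and entity names are invented for illustration.

```python
# Minimal sketch of applying a learned chain rule to complete a toy knowledge graph.
# This is an illustrative example only, not the MPLR method from the paper.
from collections import defaultdict

# Toy knowledge graph as a set of (head, relation, tail) triples.
triples = {
    ("marie", "born_in", "warsaw"),
    ("warsaw", "city_of", "poland"),
    ("alan", "born_in", "london"),
    ("london", "city_of", "uk"),
}

def apply_chain_rule(kg, body, head_relation):
    """Apply a two-hop chain rule body = (r1, r2) to infer head_relation triples."""
    r1, r2 = body
    # Index r1 triples by head entity for the join over the shared variable z.
    r1_tails = defaultdict(set)
    for h, r, t in kg:
        if r == r1:
            r1_tails[h].add(t)
    inferred = set()
    for z, r, y in kg:
        if r != r2:
            continue
        for x, zs in r1_tails.items():
            if z in zs:
                candidate = (x, head_relation, y)
                if candidate not in kg:  # only keep genuinely missing facts
                    inferred.add(candidate)
    return inferred

new_facts = apply_chain_rule(triples, ("born_in", "city_of"), "nationality")
print(new_facts)
# {('marie', 'nationality', 'poland'), ('alan', 'nationality', 'uk')}
```

Because the inferred triples follow from an explicit rule, each prediction is interpretable, and the same rule can be reused on unseen entities, which is the appeal of rule learning summarized in the abstract.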