Large-scale knowledge graphs provide structured representations of human knowledge. However, since it is impossible to collect all knowledge, knowledge graphs are usually incomplete. Reasoning over existing facts paves the way to discovering missing facts. In this paper, we study the problem of learning logical rules for reasoning on knowledge graphs to complete missing factual triplets. Learning logical rules equips a model with strong interpretability as well as the ability to generalize to similar tasks. We propose a model that makes full use of the training data and also accounts for multi-target scenarios. In addition, noting the deficiencies of existing measures in evaluating model performance and the quality of mined rules, we further propose two novel indicators to address this problem. Experimental results empirically demonstrate that our model outperforms state-of-the-art methods on five benchmark datasets, and they also confirm the effectiveness of the proposed indicators.
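As a concrete illustration of rule-based knowledge graph completion (not drawn from the paper's method), the minimal Python sketch below applies a learned chain rule of the form nationality(X, Y) ← born_in(X, Z) ∧ city_of(Z, Y) to a toy knowledge graph and proposes missing triplets. The graph, the relations born_in, city_of, and nationality, and the rule itself are illustrative assumptions.

```python
# A minimal sketch (not the paper's method) of applying a learned chain rule
# to a small knowledge graph to propose missing factual triplets.
from collections import defaultdict

# Knowledge graph as a set of (head_entity, relation, tail_entity) facts.
# These facts and relation names are illustrative assumptions.
facts = {
    ("marie", "born_in", "warsaw"),
    ("warsaw", "city_of", "poland"),
    ("pierre", "born_in", "paris"),
    ("paris", "city_of", "france"),
}

# Index facts by relation for quick lookup: relation -> {head: [tails]}.
by_relation = defaultdict(lambda: defaultdict(list))
for h, r, t in facts:
    by_relation[r][h].append(t)

def apply_chain_rule(body_relations, head_relation):
    """Compose the two body relations along shared intermediate entities and
    emit (X, head_relation, Y) triplets that are not already in the graph."""
    inferred = set()
    first, second = body_relations  # a length-2 rule body for simplicity
    for x, zs in by_relation[first].items():
        for z in zs:
            for y in by_relation[second].get(z, []):
                triplet = (x, head_relation, y)
                if triplet not in facts:
                    inferred.add(triplet)
    return inferred

# Rule body born_in ∘ city_of implies the head relation nationality.
print(apply_chain_rule(("born_in", "city_of"), "nationality"))
# -> {('marie', 'nationality', 'poland'), ('pierre', 'nationality', 'france')}
```

In practice, rule-learning models score many candidate rule bodies and weight the triplets they infer; the hard set-membership check above merely shows how a single mined rule turns existing facts into candidate completions.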