This paper studies learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks, and are hence critical to learn. Existing methods either suffer from searching a large search space (e.g., neural logic programming) or from ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, this paper proposes a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable and simultaneously trains a rule generator and a reasoning predictor that uses logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is first updated to explore some generated logic rules for reasoning. Then, in the E-step, a set of high-quality rules is selected from all generated rules via posterior inference, using both the rule generator and the reasoning predictor; in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets demonstrate the effectiveness of RNNLogic.
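The EM-style loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: the class names, the rule representation (tuples of relation names), and the scoring heuristic are all illustrative stand-ins for the RNN generator and the learned reasoning predictor.

```python
import random

class RuleGenerator:
    """Stand-in for the RNN rule generator; samples candidate logic rules
    from a fixed pool (the real generator is a trained recurrent network)."""
    def __init__(self, rule_pool):
        self.rule_pool = list(rule_pool)

    def generate(self, n):
        return random.sample(self.rule_pool, min(n, len(self.rule_pool)))

    def update(self, selected_rules):
        # M-step stand-in: the paper trains the generator on the selected
        # rules; here we simply promote them to the front of the pool.
        self.rule_pool = selected_rules + [r for r in self.rule_pool
                                           if r not in selected_rules]

class ReasoningPredictor:
    """Stand-in for the reasoning predictor that evaluates rules on
    knowledge-graph queries; here a placeholder quality score."""
    def score_rule(self, rule):
        return len(rule)  # illustrative: longer rule = higher score

def em_iteration(generator, predictor, n_generate=10, n_keep=3):
    # Predictor explores the currently generated rules for reasoning.
    rules = generator.generate(n_generate)
    # E-step: select a set of high-quality rules (posterior inference in
    # the paper; a top-k by predictor score in this sketch).
    selected = sorted(rules, key=predictor.score_rule, reverse=True)[:n_keep]
    # M-step: update the rule generator with the selected rules.
    generator.update(selected)
    return selected
```

The sketch keeps the structure of the algorithm (generate, select, update) while replacing both learned components with trivial placeholders, so it only conveys the control flow of the optimization, not the model itself.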