Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models -- which potentially limits performance. In this work, we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree -- which are common in highly-connected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set -- however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets -- deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across most datasets.
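To make the multi-layer convolutional scorer described above concrete, the following PyTorch-style sketch stacks 2D reshapings of the subject and relation embeddings, applies a convolution, projects the feature maps back to embedding size, and scores all candidate objects with a single dot product against the entity embedding matrix. This is a minimal sketch under assumed settings: the class name, the 32 feature maps, the 3x3 kernel, the 10x20 reshaping, and the omission of dropout and batch normalization are illustrative choices, not the exact architecture or hyperparameters of the model reported here.

```python
import torch
import torch.nn as nn

class ConvEScorer(nn.Module):
    """Illustrative ConvE-style scorer (layer sizes and reshaping are assumptions)."""
    def __init__(self, num_entities, num_relations, dim=200, h=10, w=20):
        super().__init__()
        assert h * w == dim, "embedding dimension must factor into the 2D reshaping"
        self.h, self.w = h, w
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        self.conv = nn.Conv2d(1, 32, kernel_size=3)              # 2D convolution over the stacked "image"
        self.fc = nn.Linear(32 * (2 * h - 2) * (w - 2), dim)     # project feature maps back to embedding size

    def forward(self, subj_idx, rel_idx):
        # Reshape subject and relation embeddings to 2D and stack them along the height axis
        e_s = self.ent(subj_idx).view(-1, 1, self.h, self.w)
        e_r = self.rel(rel_idx).view(-1, 1, self.h, self.w)
        x = torch.cat([e_s, e_r], dim=2)                          # (batch, 1, 2h, w)
        x = torch.relu(self.conv(x))
        x = torch.relu(self.fc(x.view(x.size(0), -1)))
        # Score every candidate object at once via a dot product with all entity embeddings
        return torch.sigmoid(x @ self.ent.weight.t())             # (batch, num_entities)
```

Scoring all objects in one matrix product (1-N scoring) is what keeps a model like this fast at evaluation time despite the extra convolutional layers.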
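The rule-based baseline referred to in the abstract exploits the test set leakage: if a relation's inverse is present in the training set, test triples can often be answered by a simple lookup. Below is a minimal sketch of one such heuristic, assuming triples are (subject, relation, object) tuples; the function names and the 0.99 overlap threshold are hypothetical choices for illustration, not the exact rule evaluated in the paper.

```python
from collections import defaultdict

def group_by_relation(train_triples):
    """Index training triples as relation -> set of (subject, object) pairs."""
    by_rel = defaultdict(set)
    for s, r, o in train_triples:
        by_rel[r].add((s, o))
    return by_rel

def find_inverse_pairs(by_rel, threshold=0.99):
    """Detect relation pairs (r1, r2) where (s, r1, o) in training almost always
    implies (o, r2, s) is also in training. The threshold is an assumed value."""
    inverses = {}
    for r1, pairs1 in by_rel.items():
        for r2, pairs2 in by_rel.items():
            if r1 == r2 or not pairs1:
                continue
            overlap = sum((o, s) in pairs2 for s, o in pairs1)
            if overlap / len(pairs1) >= threshold:
                inverses[r1] = r2
    return inverses

def predict_objects(s, r, inverses, by_rel):
    """For a test query (s, r, ?), propose objects o with (o, inverse_of_r, s) in training."""
    r2 = inverses.get(r)
    if r2 is None:
        return []  # no detected inverse: fall back to another ranking strategy
    return [subj for subj, obj in by_rel.get(r2, set()) if obj == s]
```

Because WN18 and FB15k contain many such inverse pairs, a lookup of this kind already ranks the correct entity first for a large fraction of test triples, which is why robust dataset variants are needed for meaningful evaluation.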