Due to the open world assumption, Knowledge Graphs (KGs) are never complete. To address this issue, various Link Prediction (LP) methods have been proposed. Some of these are inductive LP models capable of learning representations for entities not seen during training. However, to the best of our knowledge, none of the existing inductive LP models focuses on learning representations for unseen relations. In this work, we propose RAILD, a novel Relation Aware Inductive Link preDiction model for KG completion that learns representations for both unseen entities and unseen relations. In addition to leveraging textual literals associated with both entities and relations through language models, RAILD introduces a novel graph-based approach to generate features for relations. Experiments conducted on existing as well as newly created challenging benchmark datasets show that RAILD outperforms state-of-the-art models. Moreover, since there are no existing inductive LP models that learn representations for unseen relations, we created our own baselines, and RAILD also outperforms these baselines.
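To make the inductive setting concrete, the following is a minimal sketch of the general idea of deriving features for unseen entities and unseen relations from their textual literals with a pretrained language model. It is not the RAILD architecture: the choice of BERT as encoder, the [CLS] pooling, and the translational scoring function are all illustrative assumptions.

```python
# Illustrative sketch: score a triple purely from textual descriptions,
# so that entities AND relations unseen during training can be handled.
# NOT the RAILD architecture; it only demonstrates text-based inductive
# features produced by a pretrained language model.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed encoder
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()

def encode(text: str) -> torch.Tensor:
    """Embed a textual literal (entity or relation description) with the LM."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    # Use the [CLS] token representation as a fixed-size feature vector.
    return outputs.last_hidden_state[:, 0, :]

def score(head_desc: str, rel_desc: str, tail_desc: str) -> float:
    """Plausibility of (h, r, t); higher is more plausible."""
    h, r, t = encode(head_desc), encode(rel_desc), encode(tail_desc)
    # Simple translational-style score (h + r ≈ t); a trained model
    # would instead learn a scoring function over these features.
    return -torch.norm(h + r - t).item()

# None of the three elements need to have been seen during training,
# since all are represented by text alone.
print(score("Berlin, capital city of Germany",
            "capital of, relation linking a city to its country",
            "Germany, country in central Europe"))
```

Because relations here are represented by their descriptions rather than by trained per-relation embeddings, a model of this kind can, in principle, score triples involving relations absent from the training graph; RAILD additionally derives graph-based features for relations, which this sketch omits.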