We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability to model and infer the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns, including symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for training the RotatE model efficiently and effectively. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns, and that it significantly outperforms existing state-of-the-art models for link prediction.
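To make the "relation as rotation" idea concrete, here is a minimal NumPy sketch of a RotatE-style scoring function. It assumes entities are complex vectors and each relation is stored as a vector of phases, so the relation embedding has unit modulus in every dimension by construction; the L1 distance in the complex plane and the toy example below are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def rotate_score(head, relation_phase, tail):
    """Distance-based score for one triple under a RotatE-style model.

    head, tail: complex-valued entity embeddings of shape (k,).
    relation_phase: real phases theta of shape (k,); the relation acts as
    an element-wise rotation r = exp(i * theta), so |r_i| = 1 by construction.
    Lower scores indicate a more plausible triple.
    """
    rotation = np.exp(1j * relation_phase)   # unit-modulus complex rotation
    residual = head * rotation - tail        # rotate the head, compare with the tail
    return np.sum(np.abs(residual))          # L1 distance in the complex plane

# Toy usage: phases of 0 or pi make the rotation its own inverse, which is
# how a symmetric relation can be represented in this framework.
k = 4
rng = np.random.default_rng(0)
head = rng.standard_normal(k) + 1j * rng.standard_normal(k)
theta = np.array([np.pi, 0.0, np.pi, np.pi])
tail = head * np.exp(1j * theta)             # construct a perfectly matching tail
print(rotate_score(head, theta, tail))       # ~0: the triple is scored as plausible
```

Because rotations compose by adding phases, composition of relations corresponds to summing their phase vectors, and inverting a relation corresponds to negating its phases, which is what lets the model capture composition and inversion patterns.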
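The self-adversarial negative sampling mentioned in the abstract weights each negative triple by how plausible the current model finds it, via a temperature-scaled softmax over the negatives' scores, so that harder negatives contribute more to the loss. Below is a minimal sketch of that weighting step under these assumptions; the function name, the default temperature `alpha`, and the treatment of the weights as fixed probabilities (no gradient through them) are illustrative, not taken verbatim from the paper's code.

```python
import numpy as np

def self_adversarial_weights(neg_scores, alpha=1.0):
    """Softmax weights over negative-sample plausibility scores.

    neg_scores: array of scores f_r(h'_j, t'_j), where higher means the
    negative triple currently looks more plausible to the model.
    alpha: sampling temperature (a hypothetical default here).
    Harder negatives (higher scores) receive larger weights.
    """
    logits = alpha * np.asarray(neg_scores, dtype=float)
    logits -= logits.max()            # subtract max for numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy usage: three negatives, the second one currently fools the model most,
# so it gets the largest weight in the training loss.
print(self_adversarial_weights([-4.0, -0.5, -3.0], alpha=1.0))
```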