Performing link prediction using knowledge graph embedding (KGE) models has become a popular approach for knowledge graph completion. Such models employ a transformation function that maps nodes via edges into a vector space in order to measure the likelihood of the links. While individual nodes are mapped, the structure of their subgraphs is also transformed. Most embedding models designed in Euclidean geometry support only a single transformation type, often translation or rotation, which is suitable for learning on graphs with small differences between neighboring subgraphs. However, multi-relational knowledge graphs often include multiple subgraph structures in a neighborhood (e.g., combinations of path and loop structures), which current embedding models do not capture well. To tackle this problem, we propose a novel KGE model (5*E) in projective geometry, which supports multiple simultaneous transformations: specifically inversion, reflection, translation, rotation, and homothety. The model has several favorable theoretical properties and subsumes the existing approaches. It outperforms them on the most widely used link prediction benchmarks.
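The abstract does not spell out the transformation itself. As a rough, hedged sketch, one way to realize all five transformation types with a single map is an element-wise projective (Möbius) transformation of complex entity embeddings, scored by how close the transformed head lands to the tail. The parameter names (a, b, c, d), the `moebius` function, and the distance-based score below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch, assuming each relation is represented by element-wise complex
# parameters (a, b, c, d) of a projective (Moebius) map applied to the head
# embedding. Names and the distance-based score are hypothetical illustrations.

rng = np.random.default_rng(0)
dim = 4  # embedding dimension (complex-valued)

def random_complex(shape):
    return rng.normal(size=shape) + 1j * rng.normal(size=shape)

# Entity embeddings for a head and a tail node.
head = random_complex(dim)
tail = random_complex(dim)

# Relation parameters: one Moebius map per embedding dimension.
a, b, c, d = (random_complex(dim) for _ in range(4))

def moebius(x, a, b, c, d):
    """Element-wise projective transformation (a*x + b) / (c*x + d).

    Special parameter choices recover simpler transformation families:
      c = 0, a = d = 1           -> translation by b
      c = b = 0, |a / d| = 1     -> rotation
      c = b = 0, a / d = -1      -> point reflection through the origin
      c = b = 0, a / d real > 0  -> homothety (scaling)
      a = d = 0                  -> inversion, x -> (b / c) / x
    """
    return (a * x + b) / (c * x + d)

def score(head, tail, a, b, c, d):
    # Negative distance between the transformed head and the tail:
    # a higher score means the link is judged more plausible.
    return -np.linalg.norm(moebius(head, a, b, c, d) - tail)

print(score(head, tail, a, b, c, d))
```

In practice such relation parameters would be learned from training triples; the docstring only indicates how a single projective map can degenerate into the simpler transformation families the abstract lists.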