Knowledge graph (KG) embedding aims to learn latent representations of the entities and relations of a KG in continuous vector spaces. An empirical observation is that the head (tail) entities connected by the same relation often share similar semantic attributes (specifically, they often belong to the same category), no matter how far apart they are in the KG; that is, they share global semantic similarities. However, many existing methods derive KG embeddings from local information only and therefore fail to effectively capture such global semantic similarities among entities. To address this challenge, we propose a novel approach that introduces a set of virtual nodes, called \textit{\textbf{relational prototype entities}}, to represent the prototypes of the head and tail entities connected by the same relation. By encouraging entities' embeddings to be close to the embeddings of their associated prototypes, our approach effectively promotes global semantic similarity among entities connected by the same relation, even when they are far apart in the KG. Experiments on entity alignment and KG completion tasks demonstrate that our approach significantly outperforms recent state-of-the-art methods.
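As a minimal sketch of the idea (the notation here is ours for illustration, not taken from the paper): let $\mathbf{e}_h$ and $\mathbf{e}_t$ denote the embeddings of a head and tail entity, and let $\mathbf{p}^{h}_{r}$ and $\mathbf{p}^{t}_{r}$ denote the embeddings of the head-side and tail-side relational prototype entities for relation $r$. The prototype constraint could, for example, be realized as a regularization term over the training triples $\mathcal{T}$,
\begin{equation}
\mathcal{L}_{\mathrm{proto}} = \sum_{(h, r, t) \in \mathcal{T}} \left( \left\| \mathbf{e}_h - \mathbf{p}^{h}_{r} \right\|_2^2 + \left\| \mathbf{e}_t - \mathbf{p}^{t}_{r} \right\|_2^2 \right),
\end{equation}
added to a base KG embedding objective. Under this kind of formulation, head (tail) entities that share a relation are pulled toward a common prototype and hence toward one another, regardless of their distance in the graph.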