Over the past decade, knowledge graphs have become popular for capturing structured domain knowledge. Relational learning models enable the prediction of missing links inside knowledge graphs. More specifically, latent distance approaches model the relationships among entities via a distance between latent representations. Translation-based embedding models (e.g., TransE) are among the most popular latent distance approaches; they use a single distance function to learn multiple relation patterns. However, they are not capable of capturing symmetric relations, and they force relations with reflexive patterns to become symmetric and transitive. To improve distance-based embedding, we propose multi-distance embeddings (MDE). Our solution is based on the idea that, by learning independent embedding vectors for each entity and relation, one can aggregate contrasting distance functions. Building on MDE, we also develop supplementary distances that resolve the above-mentioned limitations of TransE. We further propose an extended loss function for distance-based embeddings and show that MDE and TransE are fully expressive under this loss function. Furthermore, we obtain a bound on the size of their embeddings for full expressivity. Our empirical results show that MDE significantly improves translation-based embeddings and outperforms several state-of-the-art embedding models on benchmark datasets.
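As a minimal sketch of the latent distance idea: TransE scores a triple (h, r, t) by treating the relation as a translation, score = ||h + r − t||, so a plausible triple gets a small distance. The aggregation function below, which sums two contrasting distance terms over independent embedding vectors, is only an illustrative stand-in for the idea described above; the weights and specific distance terms are hypothetical, not the paper's actual MDE formulation.

```python
import numpy as np

def transe_score(h: np.ndarray, r: np.ndarray, t: np.ndarray) -> float:
    # TransE latent distance: a correct triple should satisfy h + r ≈ t,
    # so the L2 norm of the residual is small for plausible triples.
    return float(np.linalg.norm(h + r - t, ord=2))

def aggregated_score(h: np.ndarray, r: np.ndarray, t: np.ndarray,
                     w1: float = 0.5, w2: float = 0.5) -> float:
    # Illustrative aggregation of two contrasting distance functions
    # (hypothetical weights w1, w2): a forward translation h + r - t
    # and a reversed one t + r - h. Combining such terms is the kind of
    # aggregation MDE enables via independent embedding vectors.
    return (w1 * float(np.linalg.norm(h + r - t))
            + w2 * float(np.linalg.norm(t + r - h)))

# Toy example: for a triple where t = h + r, the TransE distance is zero.
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = h + r
forward = transe_score(h, r, t)
combined = aggregated_score(h, r, t)
```

Note how a single translation cannot score both (h, r, t) and (t, r, h) well at once, which is why TransE struggles with symmetric relations; aggregating contrasting terms, as sketched here, is one way around that.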