Knowledge graph embedding aims to represent the entities and relations of a large-scale knowledge graph as elements of a continuous vector space. Existing methods, e.g., TransE and TransH, learn embeddings by optimizing a global margin-based loss function over the data. However, the optimal loss function is determined experimentally, with its parameters chosen from a closed set of candidates. Moreover, embeddings over two knowledge graphs with different entities and relations share the same set of candidate loss functions, ignoring the locality of either graph. This limits the performance of embedding-related applications. In this paper, we propose a locally adaptive translation method for knowledge graph embedding, called TransA, which finds the optimal loss function by adaptively determining its margin over different knowledge graphs. Experiments on two benchmark data sets demonstrate the superiority of the proposed method over state-of-the-art ones.
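For reference, the global margin-based ranking loss used by TransE-style methods (not a formula from this abstract, but the standard form it refers to) can be written as

\[
\mathcal{L} \;=\; \sum_{(h,r,t)\in\Delta}\;\sum_{(h',r,t')\in\Delta'} \big[\, \gamma + d(\mathbf{h}+\mathbf{r},\mathbf{t}) - d(\mathbf{h}'+\mathbf{r},\mathbf{t}') \,\big]_{+},
\]

where $\Delta$ and $\Delta'$ denote the sets of observed and corrupted triples, $d(\cdot,\cdot)$ is a distance (e.g., $L_1$ or $L_2$), $[x]_{+}=\max(0,x)$, and $\gamma$ is the fixed global margin that the proposed method instead determines adaptively per knowledge graph.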