Knowledge graph embedding (KGE) is an increasingly popular technique that represents the entities and relations of a knowledge graph in low-dimensional semantic spaces for a wide spectrum of applications such as link prediction, knowledge reasoning, and knowledge completion. In this paper, we provide a systematic review of existing KGE techniques based on their representation spaces. In particular, we build a fine-grained classification that categorises the models according to three mathematical perspectives on the representation spaces: (1) the algebraic perspective, (2) the geometric perspective, and (3) the analytical perspective. We introduce rigorous definitions of the fundamental mathematical spaces before diving into the KGE models and their mathematical properties. We further discuss KGE methods across the three categories, and summarise how spatial advantages serve different embedding needs. By collating experimental results from downstream tasks, we also explore the advantages of each mathematical space in different scenarios and the reasons behind them. We further state some promising research directions from a representation-space perspective, with which we hope to inspire researchers to design their KGE models, as well as related applications, with more consideration of the properties of their mathematical spaces.
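As a concrete illustration of the basic idea, here is a minimal sketch of a classical translation-based scoring function in Euclidean space, in the style of TransE: entities and relations become vectors, and link prediction ranks candidate tails by distance. The entity names and toy vectors below are invented for illustration and are not taken from this survey.

```python
import numpy as np

def transe_score(h, r, t):
    """Plausibility of triple (h, r, t): a smaller L2 distance
    ||h + r - t|| means a more plausible link, so we negate it."""
    return -np.linalg.norm(h + r - t)

# Toy 3-dimensional embeddings (hypothetical entities and relation).
paris      = np.array([0.9, 0.1, 0.0])
france     = np.array([1.0, 1.0, 0.1])
berlin     = np.array([0.2, 0.8, 0.9])
capital_of = np.array([0.1, 0.9, 0.1])

# Link prediction: rank candidate tails for (paris, capital_of, ?).
candidates = {"france": france, "berlin": berlin}
scores = {name: transe_score(paris, capital_of, t)
          for name, t in candidates.items()}
best = max(scores, key=scores.get)  # highest score = most plausible tail
```

In a trained model the embeddings would be learned from observed triples rather than hand-picked; the choice of representation space (Euclidean here, but also complex, hyperbolic, or other spaces surveyed in this paper) determines which relational patterns such a scoring function can capture.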