For knowledge graphs, knowledge graph embedding (KGE) models learn to project symbolic entities and relations into a low-dimensional continuous vector space based on the observed triplets. However, existing KGE models cannot make a proper trade-off between graph context and model complexity, which leaves them still far from satisfactory. In this paper, we propose a lightweight framework named LightCAKE for context-aware KGE. LightCAKE uses an iterative aggregation strategy to integrate multi-hop context information into the entity/relation embeddings, and it explicitly models the graph context without introducing any trainable parameters beyond the embeddings themselves. Moreover, extensive experiments on public benchmarks demonstrate the efficiency and effectiveness of our framework.
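The abstract does not specify how the iterative aggregation works, but the general idea of parameter-free, multi-hop context aggregation can be sketched as follows. This is an illustrative toy example under assumed conventions, not the actual LightCAKE algorithm: the graph data, the message form (neighbor embedding plus relation embedding), and the mean aggregator are all hypothetical choices made for the sketch.

```python
import random

# Hypothetical toy graph: each triplet is (head, relation, tail).
triplets = [(0, 0, 1), (1, 1, 2), (2, 0, 0)]
num_entities, num_relations, dim = 3, 2, 4

random.seed(0)
ent = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_entities)]
rel = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(num_relations)]

def aggregate(ent, rel, hops=2):
    """Sketch of parameter-free multi-hop context aggregation.

    Each hop, an entity's embedding is averaged with context messages
    (neighbor embedding + relation embedding) from its adjacent triplets.
    Repeating for `hops` iterations propagates multi-hop context, and no
    trainable parameters beyond the embeddings are introduced.
    """
    for _ in range(hops):
        sums = [list(v) for v in ent]  # start each sum from the entity itself
        counts = [1] * len(ent)
        for h, r, t in triplets:
            for d in range(dim):
                sums[h][d] += ent[t][d] + rel[r][d]  # context message to head
                sums[t][d] += ent[h][d] + rel[r][d]  # context message to tail
            counts[h] += 1
            counts[t] += 1
        # Mean aggregation: a fixed, parameter-free aggregator.
        ent = [[s / c for s in row] for row, c in zip(sums, counts)]
    return ent

ctx = aggregate(ent, rel)
```

After two hops, each context-aware embedding in `ctx` mixes information from entities and relations up to two edges away, which is the kind of trade-off between graph context and model complexity the abstract refers to.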