Previous knowledge graph embedding approaches usually map entities to representations and utilize score functions to predict the target entities, yet they typically struggle to reason about rare or emerging unseen entities. In this paper, we propose kNN-KGE, a new knowledge graph embedding approach with pre-trained language models that linearly interpolates the model's entity distribution with a k-nearest-neighbor distribution. We compute the nearest neighbors based on the distance in the entity embedding space from the knowledge store. Our approach allows rare or emerging entities to be memorized explicitly rather than implicitly in model parameters. Experimental results demonstrate that our approach improves inductive and transductive link prediction results and yields better performance in low-resource settings with only a few triples, which might be easier to reason over via explicit memory. Code is available at https://github.com/zjunlp/KNN-KG.
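To make the interpolation step concrete, here is a minimal PyTorch sketch of a kNN-augmented entity distribution. The names (query, store_keys, store_entity_ids, lam, temperature) and the softmax-over-negative-distances weighting are illustrative assumptions under a kNN-LM-style retrieval scheme, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def knn_interpolated_entity_distribution(
    query,             # [d] embedding the PLM produces for the masked entity slot
    store_keys,        # [N, d] entity embeddings in the knowledge store
    store_entity_ids,  # [N] long tensor: entity id for each stored key
    model_logits,      # [E] the model's logits over the entity vocabulary
    k=16,              # number of neighbors to retrieve (assumed hyperparameter)
    temperature=1.0,   # softmax temperature over negative distances (assumed)
    lam=0.5,           # interpolation weight between kNN and model distributions
):
    """Linearly interpolate the model's entity distribution with a kNN one."""
    num_entities = model_logits.size(0)
    # L2 distance from the query to every stored entity embedding
    dists = torch.cdist(query.unsqueeze(0), store_keys).squeeze(0)  # [N]
    # Retrieve the k nearest neighbors (smallest distances)
    knn_dists, knn_idx = dists.topk(k, largest=False)
    # Turn negative distances into a probability mass over the neighbors
    knn_weights = F.softmax(-knn_dists / temperature, dim=0)  # [k]
    # Aggregate neighbor mass onto entity ids (duplicate entities accumulate)
    p_knn = torch.zeros(num_entities)
    p_knn.scatter_add_(0, store_entity_ids[knn_idx], knn_weights)
    # Linear interpolation with the model's own entity distribution
    p_model = F.softmax(model_logits, dim=0)
    return lam * p_knn + (1 - lam) * p_model
```

Under this reading, lam controls how much the prediction relies on the explicit memory versus the parametric model: rare or emerging entities that appear in the knowledge store can receive probability mass from retrieval even when the parametric distribution assigns them little.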