Previous knowledge graph embedding approaches usually map entities to representations and utilize score functions to predict the target entities, yet they struggle to reason over rare or emerging unseen entities. In this paper, we propose kNN-KGE, a new knowledge graph embedding approach that linearly interpolates the entity distribution with its k-nearest neighbors. We compute the nearest neighbors based on distance in the entity embedding space from the knowledge store. Our approach allows rare or emerging entities to be memorized explicitly rather than implicitly in model parameters. Experimental results demonstrate that our approach improves inductive and transductive link prediction results and yields better performance in low-resource settings with only a few triples, which might be easier to reason over via explicit memory.
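The core mechanism described above — retrieving nearest neighbors by distance in the entity embedding space and linearly interpolating their distribution with the model's own entity distribution — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `knn_interpolate`, the interpolation weight `lam`, the neighbor count `k`, and the temperature are all hypothetical choices for exposition.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def knn_interpolate(query_emb, model_logits, store_embs, store_ids,
                    num_entities, k=4, lam=0.5, temperature=1.0):
    """Interpolate the model's entity distribution with a kNN distribution
    built from a knowledge store of (embedding, entity id) pairs.
    A sketch: hyperparameters k, lam, temperature are assumed values."""
    # distances between the query and each stored entity embedding
    dists = np.linalg.norm(store_embs - query_emb, axis=1)
    # indices of the k nearest neighbors in the store
    nn = np.argsort(dists)[:k]
    # kNN distribution: softmax over negative distances,
    # aggregated per entity (a store may hold several entries per entity)
    weights = softmax(-dists[nn] / temperature)
    p_knn = np.zeros(num_entities)
    for w, idx in zip(weights, nn):
        p_knn[store_ids[idx]] += w
    # linear interpolation with the parametric model's distribution
    p_model = softmax(model_logits)
    return lam * p_knn + (1 - lam) * p_model
```

Because the kNN term reads the store directly, an entity added to the store after training can receive probability mass even if the parametric model assigns it almost none — the explicit-memory behavior the abstract highlights for rare or emerging entities.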