Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language model-based KG embeddings are usually deployed as static artifacts, which are difficult to modify after deployment without re-training. To address this issue, we propose a new task of editing language model-based KG embeddings. The proposed task aims to enable data-efficient and fast updates to KG embeddings without degrading performance on the remaining facts. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge editing baselines, demonstrating the limited ability of previous models to handle this challenging task. We further propose a simple yet strong baseline dubbed KGEditor, which utilizes additional parametric layers of a hypernetwork to edit/add facts. Comprehensive experimental results demonstrate that KGEditor performs better when updating specific facts without affecting the rest, even with limited training resources. Code and datasets are available at https://github.com/zjunlp/PromptKG/tree/main/deltaKG.
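To make the hypernetwork-based editing mechanism concrete, the following is a minimal sketch of the general idea: a hypernetwork maps a representation of the fact to be edited to a low-rank weight update that is applied only to an additional feed-forward layer, leaving the pretrained model frozen. All names (`HyperEditor`, `fact_repr`, the dimensions, and the low-rank parameterization) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; not the authors' KGEditor implementation.
import torch
import torch.nn as nn

class HyperEditor(nn.Module):
    """Hypernetwork mapping a fact representation to a weight update
    for one additional FFN layer inside a frozen KG-embedding LM."""
    def __init__(self, fact_dim: int, ffn_in: int, ffn_out: int, rank: int = 8):
        super().__init__()
        # Low-rank factors keep the generated update cheap to produce.
        self.to_u = nn.Linear(fact_dim, ffn_out * rank)
        self.to_v = nn.Linear(fact_dim, rank * ffn_in)
        self.rank = rank
        self.ffn_in, self.ffn_out = ffn_in, ffn_out

    def forward(self, fact_repr: torch.Tensor) -> torch.Tensor:
        # fact_repr: (fact_dim,) encoding of the triple to edit/add.
        u = self.to_u(fact_repr).view(self.ffn_out, self.rank)
        v = self.to_v(fact_repr).view(self.rank, self.ffn_in)
        return u @ v  # (ffn_out, ffn_in) additive weight update

# Usage: the generated delta touches only the extra parametric layer,
# so the edit is localized and the rest of the model is untouched.
fact_dim, ffn_in, ffn_out = 768, 768, 3072
editor = HyperEditor(fact_dim, ffn_in, ffn_out)
extra_ffn = nn.Linear(ffn_in, ffn_out)      # the additional parametric layer
fact_repr = torch.randn(fact_dim)           # encoding of the fact to update
with torch.no_grad():
    extra_ffn.weight += editor(fact_repr)   # data-efficient, fast update
```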