Knowledge Graph Completion (KGC) has recently been extended to multiple knowledge graph (KG) structures, initiating new research directions, e.g., static KGC, temporal KGC, and few-shot KGC. Previous works often design KGC models tightly coupled with specific graph structures, which inevitably results in two drawbacks: 1) structure-specific KGC models are mutually incompatible; 2) existing KGC methods are not adaptable to emerging KGs. In this paper, we propose KG-S2S, a Seq2Seq generative framework that can tackle different verbalizable graph structures by unifying the representation of KG facts into "flat" text, regardless of their original form. To remedy the loss of KG structure information in the "flat" text, we further improve the input representations of entities and relations, as well as the inference algorithm, in KG-S2S. Experiments on five benchmarks show that KG-S2S outperforms many competitive baselines, setting new state-of-the-art performance. Finally, we analyze KG-S2S's ability on different relations and on Non-entity Generations.