Current generative knowledge graph construction approaches usually fail to capture structural knowledge, as they simply flatten natural language into serialized text or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capabilities in understanding natural language for structural prediction and reasoning tasks. Intuitively, we address generative knowledge graph construction with code language models: given a natural language input rendered in a code format, the target is to generate triples, so the task can be framed as code completion. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. Because code inherently possesses structure, such as class and function definitions, it serves as a useful medium for encoding prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate reasoning steps, thereby improving knowledge extraction ability. Experimental results indicate that the proposed approach obtains better performance on benchmark datasets than the baselines. Code and datasets are available at https://github.com/zjunlp/DeepKE/tree/main/example/llm.
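To make the prompt design concrete, below is a minimal sketch of how a schema might be rendered as Python class definitions and how triples could be emitted as constructor calls, with a comment-level rationale preceding the answer. The class names, the extract function, and the founder_of relation are illustrative assumptions, not the paper's exact prompt format.

```python
# Hypothetical sketch of a schema-aware, code-format prompt for
# generative KG construction. All names (Entity, Person, Organization,
# Triple, extract, founder_of) are illustrative, not the authors' schema.

class Entity:
    """Base class: every node in the knowledge graph."""
    def __init__(self, name: str):
        self.name = name

class Person(Entity):
    """Entity type drawn from the dataset schema."""

class Organization(Entity):
    """Entity type drawn from the dataset schema."""

class Triple:
    """A (head, relation, tail) knowledge triple."""
    def __init__(self, head: Entity, relation: str, tail: Entity):
        self.head, self.relation, self.tail = head, relation, tail

def extract(text: str) -> list[Triple]:
    """Prompted task: the code LM completes calls like the ones below,
    turning the input sentence into constructor-based triples."""

# In-context demonstration appended to the prompt; the model is then
# asked to continue the same pattern for an unseen sentence.
text = "Steve Jobs co-founded Apple."
# Rationale (intermediate steps emitted before the final answer):
#   entities: Person("Steve Jobs"), Organization("Apple")
#   candidate relation: founder_of
triples = [
    Triple(Person("Steve Jobs"), "founder_of", Organization("Apple")),
]
```

Encoding the schema as class definitions lets the code LM reuse its pretraining bias toward well-typed constructor calls, while the rationale comments supply the intermediate steps described above.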