The task of completing knowledge triplets has broad downstream applications, and both structural and semantic information play important roles in knowledge graph completion. Unlike previous approaches that rely on either the structure or the semantics of a knowledge graph, we propose to jointly embed the semantics of the triplets' natural-language descriptions together with their structural information. Our method embeds knowledge graphs for the completion task by fine-tuning pre-trained language models with respect to a probabilistic structured loss: the forward pass of the language model captures semantics, while the loss reconstructs structure. Extensive experiments on a variety of knowledge graph benchmarks demonstrate the state-of-the-art performance of our method. We also show that our method significantly improves performance in low-resource regimes, thanks to its better use of semantics. The code and datasets are available at https://github.com/pkusjh/LASS.