Knowledge graphs are important resources for many artificial intelligence tasks but often suffer from incompleteness. In this work, we propose to use pre-trained language models for knowledge graph completion. We treat triples in knowledge graphs as textual sequences and propose a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples. Our method takes the entity and relation descriptions of a triple as input and computes the scoring function of the triple with the KG-BERT language model. Experimental results on multiple benchmark knowledge graphs show that our method can achieve state-of-the-art performance in triple classification, link prediction, and relation prediction tasks.
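To make the abstract's idea concrete, the sketch below shows one way a triple's textual descriptions could be packed into a single sequence and scored with a BERT sequence-classification head, as the framework describes. It is a minimal illustration, not the paper's released implementation: the model name, the example descriptions, and the two-segment packing (an approximation of the paper's three-part [SEP]-separated input) are assumptions, and the score is only meaningful after fine-tuning on positive and corrupted triples.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Hypothetical setup: a plain BERT encoder with a 2-way plausibility head.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)
model.eval()

def score_triple(head_desc: str, relation_desc: str, tail_desc: str) -> float:
    """Score the plausibility of a (head, relation, tail) triple from its
    textual descriptions. Two-segment packing approximates the paper's
    head/relation/tail separation."""
    inputs = tokenizer(head_desc,
                       relation_desc + " " + tail_desc,
                       return_tensors="pt",
                       truncation=True,
                       max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability of the "plausible" label; useful only after the head has
    # been fine-tuned on true triples and negatively sampled (corrupted) ones.
    return torch.softmax(logits, dim=-1)[0, 1].item()

print(score_triple(
    "Steve Jobs was an American entrepreneur and co-founder of Apple.",
    "founder of",
    "Apple Inc. is an American multinational technology company."))
```

In this view, triple classification is the fine-tuned binary decision itself, while link prediction and relation prediction can be read as ranking candidate entities or relations by the same score.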