Knowledge graph embedding techniques are widely used for knowledge graph refinement tasks such as graph completion and triple classification. These techniques aim to embed the entities and relations of a knowledge graph (KG) in a low-dimensional continuous feature space. This paper adopts a transformer-based triplet network that creates an embedding space clustering the information about an entity or relation in the KG. It creates textual sequences from facts and fine-tunes a triplet network of pre-trained transformer-based language models. It adheres to an evaluation paradigm that relies on an efficient spatial semantic search technique. We show that this evaluation protocol is better suited to a few-shot setting for the relation prediction task. Our proposed GilBERT method is evaluated on the triple classification and relation prediction tasks on several well-known benchmark knowledge graphs, such as FB13, WN11, and FB15K. We show that GilBERT achieves results better than or comparable to the state of the art on these two refinement tasks.
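The core ingredients the abstract names can be sketched in a few lines: serializing a KG fact into a textual sequence, a triplet margin objective that pulls related items together in the embedding space, and a nearest-neighbor semantic search used at evaluation time. The snippet below is a minimal illustration only; the serialization template, the distance function, and the use of NumPy vectors in place of actual transformer encoder outputs are assumptions, not GilBERT's exact implementation.

```python
import numpy as np

def serialize(head, relation, tail):
    # Turn a KG fact into a textual sequence to be fed to a language
    # model encoder (hypothetical template; GilBERT's may differ).
    return f"{head} [SEP] {relation} [SEP] {tail}"

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet objective: pull the anchor embedding toward the
    # positive and push it away from the negative by at least `margin`.
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def nearest(query, candidates):
    # Spatial semantic search: return the candidate whose embedding has
    # the highest cosine similarity to the query embedding.
    def cos(v):
        return float(v @ query) / (np.linalg.norm(v) * np.linalg.norm(query) + 1e-9)
    return max(candidates, key=lambda name: cos(candidates[name]))

# Toy vectors stand in for encoder outputs.
anchor = np.array([0.0, 0.0])
positive = np.array([0.1, 0.0])
negative = np.array([2.0, 0.0])
loss = triplet_margin_loss(anchor, positive, negative)

relations = {"born_in": np.array([1.0, 0.0]), "works_for": np.array([0.0, 1.0])}
predicted = nearest(np.array([0.9, 0.1]), relations)
```

For relation prediction, the query embedding would come from encoding the (head, tail) context, and `nearest` ranks candidate relation embeddings; few-shot relations need only a handful of encoded examples to populate the candidate set.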