A few models have tackled the link prediction problem, also known as knowledge graph completion, by embedding knowledge graphs in comparatively low dimensions. However, state-of-the-art results are attained only at the cost of considerably increasing the embedding dimensionality, which causes scalability issues for huge knowledge bases. Transformers have recently been used successfully as powerful encoders for knowledge graphs, but the available models still suffer from scalability issues. To address this limitation, we introduce a Transformer-based model that yields expressive low-dimensional embeddings. The key is a large number of self-attention heads that apply query-dependent projections to capture mutual information between entities and relations. Empirical results on the standard link prediction benchmarks WN18RR and FB15k-237 demonstrate that our model performs comparably to the current state-of-the-art models. Notably, we obtain these results while reducing the embedding dimensionality by 66.9% on average compared to the five best recent state-of-the-art competitors.
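To make the idea concrete, the following is a minimal PyTorch sketch of the general setup the abstract describes: a low-dimensional embedding table for entities and relations, a Transformer encoder with many attention heads over the (head entity, relation) pair, and dot-product scoring against all candidate tails. The class name `LowDimKGTransformer`, the hyperparameters (`dim=64`, `heads=32`), the two-token sequence construction, and the scoring scheme are illustrative assumptions, not the paper's exact architecture.

```python
# Hedged sketch: a small Transformer encoder with many heads for KG link prediction.
# Hyperparameters and fusion/scoring choices are illustrative assumptions.
import torch
import torch.nn as nn

class LowDimKGTransformer(nn.Module):
    """Scores (head, relation, ?) queries with a low-dimensional Transformer encoder."""
    def __init__(self, n_entities, n_relations, dim=64, heads=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        # Many attention heads over a low-dimensional embedding: each head
        # applies its own query-dependent projection (dim must be divisible by heads).
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, head_idx, rel_idx):
        # Treat the head entity and the relation as a two-token input sequence.
        tokens = torch.stack([self.ent(head_idx), self.rel(rel_idx)], dim=1)
        encoded = self.encoder(tokens)             # (batch, 2, dim)
        query = encoded[:, 0] + encoded[:, 1]      # fuse entity and relation information
        # Score every entity as a candidate tail via dot product with its embedding.
        return query @ self.ent.weight.t()         # (batch, n_entities)

# Example usage with WN18RR-sized vocabularies (40,943 entities, 11 relations).
model = LowDimKGTransformer(n_entities=40943, n_relations=11)
scores = model(torch.tensor([0]), torch.tensor([3]))
print(scores.shape)  # torch.Size([1, 40943])
```

The point of the sketch is that expressiveness comes from splitting a small embedding across many attention heads rather than from enlarging the embedding itself, which is what keeps the dimensionality (and thus the parameter count per entity) low.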