Recently, the link prediction problem, also known as knowledge graph completion, has attracted considerable research attention. Although a few recent models have attempted to achieve relatively good performance by embedding knowledge graphs in low dimensions, the best results of the current state-of-the-art models are obtained at the cost of considerably increasing the dimensionality of the embeddings. This causes overfitting and, more importantly, scalability issues for very large knowledge bases. Inspired by recent advances in deep learning driven by variants of the Transformer model and, in particular, its self-attention mechanism, in this paper we propose a Transformer-based model to address this limitation. In our model, self-attention is the key to applying query-dependent projections to entities and relations and to capturing the mutual information between them, yielding highly expressive representations from low-dimensional embeddings. Empirical results on two standard link prediction datasets, FB15k-237 and WN18RR, demonstrate that our model achieves performance comparable to or better than that of the three best recent state-of-the-art competitors, while reducing the dimensionality of the embeddings by 76.3% on average.
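To make the abstract's central idea concrete, the following is a minimal sketch, not the authors' released implementation, of how self-attention can produce query-dependent representations for link prediction from low-dimensional embeddings: the head-entity and relation embeddings form a two-token sequence, a Transformer encoder lets them attend to each other, and the transformed relation token scores all candidate tail entities. All class names, dimensions, and layer counts here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentionLinkPredictor(nn.Module):
    """Hypothetical sketch: self-attention over a (head, relation) pair for KG completion."""

    def __init__(self, num_entities, num_relations, dim=64, heads=4, layers=1):
        super().__init__()
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=2 * dim, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)

    def forward(self, head_idx, rel_idx):
        # Build the (head, relation) query as a two-token sequence.
        h = self.entity_emb(head_idx)           # (batch, dim)
        r = self.relation_emb(rel_idx)          # (batch, dim)
        seq = torch.stack([h, r], dim=1)        # (batch, 2, dim)
        # Self-attention mixes head and relation, giving query-dependent projections.
        out = self.encoder(seq)                 # (batch, 2, dim)
        query = out[:, 1]                       # transformed relation token as the query vector
        # Score every entity as a candidate tail via dot product with all entity embeddings.
        scores = query @ self.entity_emb.weight.t()  # (batch, num_entities)
        return scores

# Toy usage on random triples.
model = SelfAttentionLinkPredictor(num_entities=100, num_relations=20)
heads = torch.randint(0, 100, (8,))
rels = torch.randint(0, 20, (8,))
print(model(heads, rels).shape)  # torch.Size([8, 100])
```

Because the interaction between head and relation is computed by attention rather than by enlarging the embedding space, the expressiveness of the query representation does not depend on a high embedding dimensionality, which is the property the abstract emphasizes.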