Learning the embeddings of knowledge graphs is vital in artificial intelligence and benefits various downstream applications, such as recommendation and question answering. In recent years, many methods have been proposed for knowledge graph embedding. However, most previous knowledge graph embedding methods ignore the semantic similarity between related entities and entity-relation pairs across different triples, because they optimize each triple independently with the scoring function. To address this problem, we propose a simple yet efficient contrastive learning framework for knowledge graph embeddings, which shortens the semantic distance between related entities and entity-relation pairs in different triples and thus improves the expressiveness of the learned embeddings. We evaluate the proposed method on three standard knowledge graph benchmarks. Notably, our method yields new state-of-the-art results, achieving 51.2% MRR and 46.8% Hits@1 on the WN18RR dataset, and 59.1% MRR and 51.8% Hits@1 on the YAGO3-10 dataset.
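To make the core idea concrete, the sketch below shows a generic InfoNCE-style contrastive loss that pulls an anchor embedding toward a semantically related ("positive") embedding and pushes it away from unrelated ("negative") ones. This is only an illustrative sketch of the general contrastive-learning principle; the function names, the plain-Python vectors, and the exact loss form are assumptions for exposition and are not taken from the paper itself.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors (plain Python lists).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style contrastive loss: it is small when the anchor is
    # closer (in cosine similarity) to the positive than to the
    # negatives, so minimizing it shortens the semantic distance
    # between related embeddings.  Illustrative sketch only.
    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + negs))

# Toy 2-D "entity embeddings": an anchor that is near its positive
# gives a much lower loss than one whose positive points away from it.
anchor = [1.0, 0.0]
loss_related = info_nce_loss(anchor, [0.95, 0.1], [[-1.0, 0.0], [0.0, 1.0]])
loss_unrelated = info_nce_loss(anchor, [-0.95, 0.1], [[1.0, 0.0], [0.0, 1.0]])
```

In a knowledge graph embedding setting, the anchor and positive would be embeddings of related entities or entity-relation pairs drawn from different triples, while negatives come from unrelated triples.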