Knowledge graph embedding (KGE) has shown great potential in automatic knowledge graph (KG) completion and knowledge-driven tasks. However, recent KGE models suffer from high training costs and large storage requirements, which limit their practicality in real-world applications. To address this challenge, we draw on recent findings in contrastive learning and propose a novel KGE training framework called Hardness-aware Low-dimensional Embedding (HaLE). Instead of traditional negative sampling, we design a new loss function based on query sampling that balances two important training objectives, alignment and uniformity. Furthermore, we analyze the hardness-aware ability of recent low-dimensional hyperbolic models and propose a lightweight hardness-aware activation mechanism. Experimental results show that, within a limited training time, HaLE effectively improves both the performance and the training speed of KGE models on five commonly used datasets. After just a few minutes of training, HaLE-trained models are competitive with state-of-the-art models in both low- and high-dimensional settings.
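The alignment and uniformity objectives mentioned above come from the contrastive-learning literature: alignment pulls embeddings of positive pairs together, while uniformity spreads all embeddings over the unit hypersphere. A minimal sketch of these two losses in PyTorch (the function names and hyperparameter values are illustrative, not taken from the paper):

```python
import torch

def alignment_loss(x, y, alpha=2):
    """Mean distance between positive pairs.

    x, y: L2-normalized embeddings of matched (positive) pairs, shape (N, d).
    """
    return (x - y).norm(dim=1).pow(alpha).mean()

def uniformity_loss(x, t=2):
    """Log of the mean Gaussian potential over all pairwise distances.

    Lower values mean the embeddings are spread more uniformly
    over the unit hypersphere. x: L2-normalized embeddings, shape (N, d).
    """
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()
```

A training objective would then combine the two, e.g. `alignment_loss(q, pos) + lam * uniformity_loss(q)`, where the trade-off weight `lam` is a tunable hyperparameter; how HaLE's query-sampling loss balances the two terms is detailed in the paper itself.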