Knowledge graph embedding (KGE) has been intensively investigated for link prediction by projecting entities and relations into continuous vector spaces. Popular high-dimensional KGE methods obtain only slight performance gains while requiring enormous computation and memory costs. In contrast, training low-dimensional models is more efficient and better suited for deployment in practical intelligent systems. However, the expressiveness of semantic information in knowledge graphs (KGs) is highly limited in a low-dimensional parameter space. In this paper, we propose an iterative self-semantic knowledge distillation strategy to improve KGE model expressiveness in the low-dimensional space. A KGE model combined with our proposed strategy plays the teacher and student roles alternately during the whole training process. Specifically, at a given iteration, the model acts as a teacher, providing semantic information for the student; at the next iteration, it acts as a student, incorporating the semantic information transferred from the teacher. We also design a novel semantic extraction block to extract iteration-based semantic information for the training model to self-distill. Iteratively incorporating and accumulating this iteration-based semantic information makes the low-dimensional model more expressive for link prediction in KGs. Only one model is used throughout training, which avoids increasing computational and memory requirements. Furthermore, the proposed strategy is model-agnostic and can be seamlessly combined with other KGE models. Consistent and significant performance gains in experimental evaluations on four standard datasets demonstrate the effectiveness of the proposed self-distillation strategy.
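The teacher/student role alternation described above can be sketched as a training loop in which the model's outputs from the previous iteration serve as soft teacher targets for the current iteration. The following is a minimal, hypothetical sketch assuming a temperature-scaled softmax distillation loss; the function names, shapes, and hyperparameters are illustrative only and do not reflect the paper's actual semantic extraction block or scoring function.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distill_loss(logits, labels, teacher_logits=None, alpha=0.5, T=2.0):
    """Combine hard-label cross-entropy with a KL term against the
    previous iteration's outputs (the "teacher"). Hypothetical loss form."""
    p = softmax(logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    if teacher_logits is None:
        return ce  # first iteration: no teacher signal yet
    pt = softmax(teacher_logits, T)
    ps = softmax(logits, T)
    # KL(teacher || student) at temperature T, scaled by T^2 as is standard in KD.
    kd = np.mean(np.sum(pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12)), axis=-1)) * T * T
    return (1 - alpha) * ce + alpha * kd

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=4)        # gold tail-entity indices (toy data)
teacher_logits = None                      # no semantic information at iteration 0

for it in range(3):
    logits = rng.normal(size=(4, 5))       # stand-in for the KGE model's scores
    loss = self_distill_loss(logits, labels, teacher_logits)
    teacher_logits = logits                # this iteration's model becomes the teacher
```

Note that only a single model exists throughout: the "teacher" is just the cached outputs of the same model from the previous iteration, which matches the single-model property emphasized in the abstract.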