Knowledge graph embedding models (KGEMs) are used for various tasks related to knowledge graphs (KGs), including link prediction. They are trained with loss functions computed over a batch of scored triples and their corresponding labels. Traditional approaches consider the label of a triple to be either true or false. However, recent works suggest that not all negative triples should be valued equally. In line with this recent assumption, we posit that semantically valid negative triples might be high-quality negative triples, and that loss functions should therefore treat them differently from semantically invalid ones. To this end, we propose semantic-driven versions of the three main loss functions for link prediction. In particular, we treat the scores of negative triples differently by injecting background knowledge about relation domains and ranges into the loss functions. In an extensive and controlled experimental setting, we show that the proposed loss functions systematically provide satisfying results on three public benchmark KGs underpinned by different schemas, which demonstrates both the generality and superiority of our proposed approach. In fact, the proposed loss functions (1) lead to better MRR and Hits@$10$ values, and (2) drive KGEMs towards better semantic awareness. This highlights that semantic information globally improves KGEMs, and thus should be incorporated into loss functions. Since domains and ranges of relations are largely available in schema-defined KGs, our approach is both beneficial and widely usable in practice.
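As a minimal illustrative sketch (not the paper's exact formulation), the idea of injecting domain and range knowledge into a loss function can be shown with a pairwise margin-based loss in which negatives that violate the relation's domain or range receive a different margin than semantically valid negatives. All function names, arguments, and margin values below are hypothetical, and scores are assumed to follow a higher-is-more-plausible convention:

```python
def semantic_driven_margin_loss(pos_scores, neg_scores, sem_valid,
                                margin_valid=1.0, margin_invalid=2.0):
    """Hypothetical semantic-driven pairwise hinge loss.

    pos_scores / neg_scores: plausibility scores of positive triples and
        their paired negative triples (higher = more plausible).
    sem_valid: booleans flagging whether each negative triple respects the
        relation's domain and range (background schema knowledge).
    Semantically valid negatives are harder, so they can be assigned a
    margin different from semantically invalid ones.
    """
    losses = []
    for p, n, valid in zip(pos_scores, neg_scores, sem_valid):
        margin = margin_valid if valid else margin_invalid
        # hinge: penalize when the positive does not outscore the
        # negative by at least the (semantics-dependent) margin
        losses.append(max(0.0, margin - p + n))
    return sum(losses) / len(losses)
```

For example, a well-separated pair with a semantically valid negative incurs no loss, while the same pair flagged as semantically invalid can still be penalized under the larger margin, pushing such negatives further away in score space.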