Self-supervised entity alignment (EA) aims to link equivalent entities across different knowledge graphs (KGs) without seed alignments. The current SOTA self-supervised EA method draws inspiration from contrastive learning, originally designed in computer vision around instance discrimination and a contrastive loss, and suffers from two shortcomings. First, it places unidirectional emphasis on pushing sampled negative entities far apart rather than pulling positively aligned pairs close, as is done in well-established supervised EA. Second, KGs contain rich side information (e.g., entity descriptions), and how to effectively leverage such information has not been adequately investigated in self-supervised EA. In this paper, we propose an interactive contrastive learning model for self-supervised EA. The model not only encodes the structures and semantics of entities (including entity names, entity descriptions, and entity neighborhoods), but also conducts cross-KG contrastive learning by building pseudo-aligned entity pairs. Experimental results show that our approach outperforms the previous best self-supervised results by a large margin (over 9% average improvement) and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA.
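To make the push/pull distinction concrete, the following is a minimal illustrative sketch (not the paper's actual objective) of a standard InfoNCE-style contrastive loss: the numerator term pulls a pseudo-aligned positive pair close while the denominator pushes sampled negatives away. The function name `info_nce` and all tensors here are hypothetical.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Illustrative InfoNCE loss: pulls the positive entity pair close
    while pushing sampled negatives away. A hypothetical sketch, not
    the exact objective of the proposed model."""
    def cos(a, b):
        # Cosine similarity between two embedding vectors
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Logit 0 is the positive pair; the rest are sampled negatives
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    # Numerically stable softmax cross-entropy with the positive at index 0
    logits -= logits.max()
    probs = np.exp(logits) / np.exp(logits).sum()
    return float(-np.log(probs[0]))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.05 * rng.normal(size=8)   # near-duplicate entity embedding
negatives = [rng.normal(size=8) for _ in range(5)]
loss = info_nce(anchor, positive, negatives)
```

A purely negative-pushing objective would only penalize the denominator terms; the interactive setting described above additionally supplies cross-KG pseudo-aligned pairs as the positives in the numerator.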