Multi-modal entity alignment (MMEA) aims to find all equivalent entity pairs between multi-modal knowledge graphs (MMKGs). Rich attributes and neighboring entities are valuable for the alignment task, but existing works ignore the contextual gap problem: when learning entity representations, the entities to be aligned may have different numbers of attributes in a given modality. In this paper, we propose a novel attribute-consistent knowledge graph representation learning framework for MMEA (ACK-MMEA) that compensates for these contextual gaps by incorporating consistent alignment knowledge. Attribute-consistent KGs (ACKGs) are first constructed via multi-modal attribute uniformization with merge and generate operators, so that each entity has one and only one uniform feature in each modality. The ACKGs are then fed into a relation-aware graph neural network with random dropouts to obtain aggregated relation representations and robust entity representations. To make ACK-MMEA well suited for entity alignment, we design a joint alignment loss that evaluates both entities and attributes. Extensive experiments on two benchmark datasets show that our approach achieves excellent performance compared to its competitors.
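To make the described pipeline more concrete, the following is a minimal sketch, not the authors' implementation, of the two components named in the abstract: a relation-aware GNN layer with random dropout on neighbor messages, and a joint alignment loss over entity and attribute embeddings. All tensor shapes, module names, the aggregation scheme, and the margin value are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of ACK-MMEA components; shapes, names, and the
# margin are assumptions, not the paper's exact formulation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationAwareGNNLayer(nn.Module):
    """Aggregates neighbor messages conditioned on relation embeddings,
    applying random dropout to messages for robust entity representations."""

    def __init__(self, dim: int, dropout: float = 0.3):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # combines neighbor and relation features
        self.dropout = nn.Dropout(dropout)   # random dropout on messages

    def forward(self, ent_emb, rel_emb, edges):
        # edges: LongTensor [E, 3] of (head, relation, tail) index triples
        h, r, t = edges[:, 0], edges[:, 1], edges[:, 2]
        messages = self.msg(torch.cat([ent_emb[t], rel_emb[r]], dim=-1))
        messages = self.dropout(torch.relu(messages))
        out = torch.zeros_like(ent_emb)
        out.index_add_(0, h, messages)       # sum messages into head entities
        return F.normalize(ent_emb + out, dim=-1)


def joint_alignment_loss(src, tgt, attr_src, attr_tgt, margin: float = 1.0):
    """Margin-based loss over aligned entity pairs plus an attribute
    consistency term; negatives are drawn by shuffling the target side."""
    pos = (src - tgt).norm(dim=-1)
    neg = (src - tgt[torch.randperm(tgt.size(0))]).norm(dim=-1)
    ent_loss = F.relu(margin + pos - neg).mean()
    attr_loss = (attr_src - attr_tgt).norm(dim=-1).mean()
    return ent_loss + attr_loss
```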