Multimodal knowledge graph completion (MKGC) aims to predict missing entities in multimodal knowledge graphs (MKGs). Previous works usually share a single relation representation across modalities. This causes mutual interference between modalities during training, since for a given entity pair, the relation expressed in one modality may contradict that in another. Furthermore, making a unified prediction from the shared relation representation treats the inputs from different modalities equally, while their importance to the MKGC task can differ. In this paper, we propose MoSE, a Modality Split representation learning and Ensemble inference framework for MKGC. Specifically, in the training phase, we learn modality-split relation embeddings for each modality instead of a single modality-shared one, which alleviates modality interference. Based on these embeddings, in the inference phase, we first make modality-split predictions and then exploit various ensemble methods to combine the predictions with different weights, modeling modality importance dynamically. Experimental results on three KG datasets show that MoSE outperforms state-of-the-art MKGC methods. Code is available at https://github.com/OreOZhao/MoSE4MKGC.
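The ensemble inference step described above can be illustrated with a minimal sketch. This is not the paper's implementation; the modality names, scores, and fixed weights below are hypothetical, and the sketch uses a simple weighted average of per-modality entity scores (the paper explores several ensemble methods, including dynamically learned weights).

```python
import numpy as np

def ensemble_scores(modality_scores, weights):
    """Combine per-modality entity scores with a weighted average.

    modality_scores: dict mapping modality name -> score vector over
                     candidate entities (higher = more plausible).
    weights: dict mapping modality name -> importance weight.
    """
    total = sum(weights.values())
    combined = sum(weights[m] * s for m, s in modality_scores.items())
    return combined / total

# Hypothetical modality-split scores for 4 candidate tail entities.
scores = {
    "structure": np.array([0.9, 0.2, 0.4, 0.1]),
    "image":     np.array([0.3, 0.8, 0.5, 0.2]),
    "text":      np.array([0.6, 0.4, 0.7, 0.3]),
}
# Hypothetical importance weights (in practice these can be learned).
weights = {"structure": 0.5, "image": 0.2, "text": 0.3}

combined = ensemble_scores(scores, weights)
predicted = int(np.argmax(combined))  # index of the top-ranked entity
```

Here each modality first produces its own ranking (the "modality-split predictions"), and only the final scores are fused, so a noisy modality can be down-weighted rather than corrupting a shared representation during training.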