Knowledge Graph Completion (KGC) has been widely studied to infer missing elements of triples, mainly by modeling graph structural features, but such methods are sensitive to the sparsity of the graph structure. Relevant texts such as entity names and descriptions, acting as another expression form for Knowledge Graphs (KGs), are expected to address this challenge. Several methods have been proposed to exploit both structural and textual information with two encoders, but they achieve only limited improvements because they fail to balance the weights between the two sources. Moreover, retaining both the structural and textual encoders during inference incurs heavy parameter overhead. Motivated by Knowledge Distillation, we view knowledge as mappings from inputs to output probabilities and propose VEM2L, a plug-and-play framework over sparse KGs that fuses the knowledge extracted from textual and structural information into a unified model. Specifically, we partition the knowledge acquired by a model into two non-overlapping parts: one part concerns the fitting capacity on training triples, which can be fused by encouraging the two encoders to learn from each other on the training set; the other reflects the generalization ability on unobserved queries. For the latter, we propose a new fusion strategy, justified by the Variational EM algorithm, to fuse the generalization ability of the two models, during which we also apply graph densification operations to further alleviate the graph sparsity problem. Combining these two fusion strategies yields the full VEM2L framework. Detailed theoretical analysis, together with quantitative and qualitative experiments, demonstrates the effectiveness and efficiency of the proposed framework.
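The mutual-learning component on training triples can be pictured as a symmetric distillation objective between the output distributions of the two encoders. The following is a minimal PyTorch sketch under assumed names (struct_logits, text_logits, temperature tau, weight alpha are all illustrative); the paper's actual objective and weighting may differ.

```python
import torch
import torch.nn.functional as F

def mutual_learning_loss(struct_logits, text_logits, labels, alpha=0.5, tau=2.0):
    """Illustrative mutual-learning objective: each encoder fits the
    training labels and also matches the other's softened predictions,
    treating knowledge as a mapping from inputs to output probabilities.
    All hyperparameter values here are placeholders, not the paper's."""
    # Supervised cross-entropy on the observed training triples.
    ce_struct = F.cross_entropy(struct_logits, labels)
    ce_text = F.cross_entropy(text_logits, labels)

    # Symmetric KL between the temperature-softened output distributions
    # of the structural and textual encoders.
    log_p_struct = F.log_softmax(struct_logits / tau, dim=-1)
    log_p_text = F.log_softmax(text_logits / tau, dim=-1)
    kl_s = F.kl_div(log_p_struct, log_p_text.exp(), reduction="batchmean")
    kl_t = F.kl_div(log_p_text, log_p_struct.exp(), reduction="batchmean")

    # tau**2 rescaling follows standard distillation practice.
    return ce_struct + ce_text + alpha * (tau ** 2) * (kl_s + kl_t)
```

In this reading, the cross-entropy terms capture the fitting capacity on training triples, while the symmetric KL terms exchange each encoder's probabilistic knowledge with the other.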