Knowledge graphs encode rich relational structure about the world, and thus complement data-driven machine learning over heterogeneous data. One of the most effective approaches to representing knowledge graphs is to embed symbolic relations and entities into continuous spaces, where a relation acts approximately as a linear translation between the projected images of its head and tail entities in the relation space. However, state-of-the-art relation-projection methods such as TransR, TransD, and TranSparse do not model the correlation between relations, and therefore do not scale to complex knowledge graphs with thousands of relations, in terms of both computational demand and statistical robustness. To this end we introduce TransF, a novel translation-based method that mitigates the burden of relation projection by explicitly modeling the basis subspaces of the projection matrices. As a result, TransF is far more lightweight than existing projection methods and remains robust when the number of relations is large. Experimental results on the canonical link prediction task show that our proposed model outperforms competing methods by a large margin and achieves state-of-the-art performance. In particular, TransF improves head/tail entity prediction for N-to-1/1-to-N relations by 9%/5% over the best-performing translation-based method.
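As a concrete illustration of the idea, the minimal NumPy sketch below computes a translation-based score whose relation-specific projection matrix is a weighted combination of a small set of shared basis matrices, so the number of projection parameters grows with the number of bases rather than with the number of relations. The abstract does not specify the exact formulation, so the TransR-style score ||M_r h + r - M_r t||, the number of bases k, and all names (bases, alpha, projection_matrix, score) are illustrative assumptions, not the paper's definitive model.

```python
import numpy as np

# Hypothetical dimensions: d-dimensional embeddings, k shared bases.
d, k = 50, 10
rng = np.random.default_rng(0)

# Shared basis matrices for all relation projections (the key idea:
# projection parameters scale with k, not with the number of relations).
bases = rng.normal(size=(k, d, d)) / np.sqrt(d)

def projection_matrix(alpha):
    """Relation-specific projection as a weighted sum of shared bases.

    alpha: (k,) per-relation mixing coefficients (assumed parameterization).
    """
    # Contract alpha's k axis against the first axis of bases -> (d, d).
    return np.tensordot(alpha, bases, axes=1)

def score(h, r_vec, t, alpha):
    """TransR-style translation score ||M_r h + r - M_r t||.

    Lower values indicate a more plausible (head, relation, tail) triple.
    """
    M_r = projection_matrix(alpha)
    return np.linalg.norm(M_r @ h + r_vec - M_r @ t)

# Toy usage: one relation and one (head, tail) entity pair.
h, t = rng.normal(size=d), rng.normal(size=d)
r_vec = rng.normal(size=d)
alpha = rng.normal(size=k)
print(score(h, r_vec, t, alpha))
```

Under this factorization, the projection matrices cost k*d^2 shared parameters plus k mixing coefficients per relation, instead of d^2 parameters per relation as in TransR, which is why such an approach stays lightweight as the number of relations grows.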