Recently, knowledge representation learning (KRL) has emerged as the state-of-the-art approach to processing queries over knowledge graphs (KGs), wherein KG entities and the query are embedded into a latent space such that entities answering the query are embedded close to the query. Yet, despite intensive research on KRL, most existing studies either focus on homogeneous KGs or target KG completion tasks (i.e., inference of missing facts), while answering complex logical queries over KGs with multiple aspects (multi-view KGs) remains an open challenge. To bridge this gap, in this paper we present ROMA, a novel KRL framework for answering logical queries over multi-view KGs. Compared with prior work, ROMA departs in several major aspects: (i) it models a multi-view KG as a set of overlaying sub-KGs, each corresponding to one view, which subsumes many types of KGs studied in the literature (e.g., temporal KGs); (ii) it supports complex logical queries with varying relation and view constraints (e.g., with complex topology and/or from multiple views); (iii) it scales to KGs of large sizes (e.g., millions of facts) and fine-grained views (e.g., dozens of views); (iv) it generalizes to query structures and KG views unobserved during training. Extensive empirical evaluation on real-world KGs shows that ROMA significantly outperforms alternative methods.