Complex Query Answering (CQA) over Knowledge Graphs (KGs) has attracted significant attention due to its potential to support many applications. Since KGs are usually incomplete, neural models have been proposed to answer logical queries by parameterizing set operators with complex neural networks. However, such methods usually train the neural set operators together with a large number of entity and relation embeddings from scratch, so it remains unclear whether and how the embeddings or the neural set operators contribute to the performance. In this paper, we propose a simple framework for complex query answering that decouples KG embeddings from neural set operators. We propose to represent complex queries as query graphs. On top of the query graph, we propose the Logical Message Passing Neural Network (LMPNN), which connects local one-hop inferences on atomic formulas to global logical reasoning for complex query answering. We leverage existing effective KG embeddings to conduct one-hop inferences on atomic formulas, and the results are regarded as the messages passed in LMPNN. The reasoning process over the overall logical formula is turned into the forward pass of LMPNN, which incrementally aggregates local information to finally predict the embeddings of the answers. The complex logical inference across different types of queries is then learned from training examples based on the LMPNN architecture. Theoretically, our query-graph representation is more general than the prevailing operator-tree formulation, so our approach applies to a broader range of complex KG queries. Empirically, our approach yields a new state-of-the-art neural CQA model. Our research bridges the gap between complex KG query answering tasks and the long-standing achievements of knowledge graph representation learning.
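The sketch below is a minimal, hypothetical illustration of the idea described above: a complex query is represented as a query graph, one-hop inferences on atomic formulas (here approximated with a TransE-style translation) produce the messages, and a small per-layer network aggregates incoming messages to update node embeddings until the answer variable's embedding can be matched against all entities. The class name `LMPNNSketch`, the dimensions, and the aggregation choices are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class LMPNNSketch(nn.Module):
    """A toy logical message passing network over a query graph (illustrative only)."""

    def __init__(self, num_entities, num_relations, dim=32, num_layers=2):
        super().__init__()
        self.ent_emb = nn.Embedding(num_entities, dim)   # stands in for pretrained KG entity embeddings
        self.rel_emb = nn.Embedding(num_relations, dim)  # stands in for pretrained KG relation embeddings
        # per-layer MLP that aggregates incoming logical messages into a node state
        self.update = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
            for _ in range(num_layers)
        )
        self.num_layers = num_layers

    def one_hop_message(self, src_state, rel_id, inverse=False):
        # One-hop inference on an atomic formula r(h, t): with a TransE-style embedding,
        # the plausible object of (h, r, ?) lies near h + r and the plausible subject of
        # (?, r, t) lies near t - r.  That estimate is the message sent along the edge.
        r = self.rel_emb(torch.tensor(rel_id))
        return src_state - r if inverse else src_state + r

    def forward(self, nodes, edges):
        # nodes: dict node_name -> entity id for constants, or None for (existential/answer) variables
        # edges: list of atomic formulas (head_node, relation_id, tail_node)
        dim = self.ent_emb.embedding_dim
        state = {
            v: self.ent_emb(torch.tensor(e)) if e is not None else torch.zeros(dim)
            for v, e in nodes.items()
        }
        for layer in range(self.num_layers):
            incoming = {v: [] for v in nodes}
            for h, r, t in edges:
                incoming[t].append(self.one_hop_message(state[h], r))                   # forward message
                incoming[h].append(self.one_hop_message(state[t], r, inverse=True))     # backward message
            # aggregate messages (mean, for simplicity) and update each node's state
            state = {
                v: self.update[layer](torch.stack(msgs).mean(0)) if msgs else state[v]
                for v, msgs in incoming.items()
            }
        return state  # the answer variable's state is scored against all entity embeddings


# Example query graph for  ?a . exists x : r0(e5, x) AND r1(x, a)
model = LMPNNSketch(num_entities=100, num_relations=10)
state = model(nodes={"e5": 5, "x": None, "a": None},
              edges=[("e5", 0, "x"), ("x", 1, "a")])
scores = state["a"] @ model.ent_emb.weight.T  # rank candidate answers by similarity
```

In this toy setting the KG embeddings (`ent_emb`, `rel_emb`) would be pretrained and frozen, so only the per-layer update networks are learned from complex-query training examples, which is the decoupling the abstract argues for.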