Knowledge graph (KG) embeddings have been a mainstream approach for reasoning over incomplete KGs. However, limited by their inherently shallow and static architectures, they can hardly handle the increasingly important complex logical queries, which comprise logical operators, imputed edges, multiple source entities, and unknown intermediate entities. In this work, we present the Knowledge Graph Transformer (kgTransformer) with masked pre-training and fine-tuning strategies. We design a KG triple transformation method that enables the Transformer to handle KGs, further strengthened by Mixture-of-Experts (MoE) sparse activation. We then formulate complex logical queries as masked prediction and introduce a two-stage masked pre-training strategy to improve transferability and generalizability. Extensive experiments on two benchmarks demonstrate that kgTransformer consistently outperforms both KG embedding-based baselines and advanced encoders on nine in-domain and out-of-domain reasoning tasks. Additionally, kgTransformer can reason with explainability by providing the full reasoning paths that interpret given answers.