Knowledge graph question generation (QG) aims to generate natural language questions from a KG and target answers. Most previous works focus on the simple setting of generating questions from a single KG triple. In this work, we focus on a more realistic setting, where we aim to generate questions from a KG subgraph and target answers. In addition, most previous works rely on either RNN-based or Transformer-based models to encode a linearized KG subgraph, which entirely discards the explicit structure information contained in the subgraph. To address this issue, we propose to apply a bidirectional Graph2Seq model to encode the KG subgraph. Furthermore, we enhance our RNN decoder with a node-level copying mechanism that allows directly copying node attributes from the input graph to the output question. We also explore different ways of initializing node/edge embeddings and handling multi-relational graphs. Our model is end-to-end trainable and achieves new state-of-the-art scores, outperforming existing methods by a significant margin on two benchmarks.
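To make the node-level copying mechanism concrete, here is a minimal NumPy sketch of the standard copy-augmented output distribution: the decoder mixes a generation distribution over the vocabulary with an attention-based copy distribution over input-graph nodes. All function and variable names here are hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_augmented_distribution(vocab_logits, node_scores, p_gen, node_to_vocab):
    """Sketch of a node-level copy mechanism (names are illustrative).

    vocab_logits : (V,) decoder logits over the output vocabulary
    node_scores  : (N,) attention scores over the N input-graph nodes
    p_gen        : scalar in [0, 1], probability of generating vs. copying
    node_to_vocab: length-N list mapping each node's attribute to a vocab id
    """
    gen_dist = softmax(vocab_logits)    # distribution from the decoder softmax
    copy_dist = softmax(node_scores)    # distribution over graph nodes
    final = p_gen * gen_dist
    # Scatter the copy probability mass onto the vocabulary ids of the
    # node attributes, so rare entities can be emitted by copying.
    for n, v in enumerate(node_to_vocab):
        final[v] += (1.0 - p_gen) * copy_dist[n]
    return final
```

A node strongly attended by the decoder thus raises the probability of emitting its attribute token even when the generation distribution assigns it little mass.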