Multi-hop question generation (MQG) aims to generate complex questions that require reasoning over multiple pieces of information in the input passage. Most existing work on MQG has focused on graph-based networks that equip the traditional sequence-to-sequence framework with reasoning ability. However, these models do not take full advantage of the constraint between questions and answers. Furthermore, studies on multi-hop question answering (QA) suggest that Transformers can replace the graph structure for multi-hop reasoning. In this work, we therefore propose QA4QG, a QA-augmented BART-based framework for MQG. It augments the standard BART model with an additional multi-hop QA module that further constrains the generated question. Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models, improving over the best previously reported results by 8 BLEU-4 and 8 ROUGE points. Our work demonstrates the advantage of introducing pre-trained language models and a QA module for the MQG task.
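To make the QA-constraint idea concrete, below is a minimal sketch of answer-aware question generation with a QA consistency check. It is not the QA4QG architecture itself (which fuses the multi-hop QA module's representations into BART); instead it approximates the constraint by reranking BART-generated candidate questions with an off-the-shelf extractive QA model. The checkpoint name `your-org/bart-mqg` is a hypothetical BART model assumed to be fine-tuned for question generation; `deepset/roberta-base-squad2` is a public QA checkpoint.

```python
# Sketch: QA-constrained multi-hop question generation (simplified stand-in
# for QA4QG's QA module, implemented here as candidate reranking).
from transformers import BartTokenizer, BartForConditionalGeneration, pipeline

# Hypothetical checkpoint: BART fine-tuned for answer-aware question generation.
QG_NAME = "your-org/bart-mqg"
tokenizer = BartTokenizer.from_pretrained(QG_NAME)
qg_model = BartForConditionalGeneration.from_pretrained(QG_NAME)

# Off-the-shelf extractive QA model used as the consistency constraint.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

def generate_constrained_question(passage: str, answer: str, k: int = 5) -> str:
    """Generate k candidate questions conditioned on (answer, passage), then
    keep the candidate whose QA-predicted answer best matches the target."""
    # Concatenate answer and passage so generation is answer-aware.
    inputs = tokenizer(f"{answer} </s> {passage}", return_tensors="pt",
                       truncation=True, max_length=1024)
    outputs = qg_model.generate(**inputs, num_beams=k, num_return_sequences=k,
                                max_length=64, early_stopping=True)
    candidates = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

    def qa_score(question: str) -> float:
        # Score a candidate by whether the QA model recovers the target answer.
        pred = qa(question=question, context=passage)
        return pred["score"] if answer.lower() in pred["answer"].lower() else 0.0

    return max(candidates, key=qa_score)
```

This reranking variant enforces the question-answer constraint only at inference time; QA4QG's contribution is to inject the QA signal during generation itself, but the sketch illustrates why a QA module helps: candidates that a multi-hop QA model cannot answer correctly are penalized.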