Users frequently ask simple factoid questions of question answering (QA) systems, attenuating the impact of the many recent works that support more complex questions. Prompting users with automatically generated suggested questions (SQs) can improve their understanding of QA system capabilities and thus facilitate more effective use. We aim to produce self-explanatory questions that focus on the main topics of a document and are answerable with passages of variable length as appropriate. We satisfy these requirements by using a BERT-based Pointer-Generator Network trained on the Natural Questions (NQ) dataset. Our model achieves state-of-the-art performance on SQ generation for the NQ dataset (20.1 BLEU-4). We further apply our model to out-of-domain news articles, evaluating with a QA system due to the lack of gold questions, and demonstrate that our model produces better SQs for news articles, with further confirmation via a human evaluation.