We introduce a new dataset for conversational question answering over Knowledge Graphs (KGs) with verbalized answers. Question answering over KGs currently focuses on answer generation either for single-turn questions (KGQA) or for multi-turn conversational question answering (ConvQA). However, in real-world scenarios (e.g., voice assistants such as Siri, Alexa, and Google Assistant), users prefer verbalized answers. This paper contributes to the state of the art by extending an existing ConvQA dataset with multiple paraphrased verbalized answers. We perform experiments with five sequence-to-sequence models on generating answer responses while maintaining grammatical correctness. We additionally perform an error analysis that details the rates of the models' mispredictions across specified categories. Our proposed dataset, extended with answer verbalization, is publicly available with detailed documentation on its usage for wider utility.