Due to its potential to serve as a universal interface over both data and text, data-to-text generation has become increasingly popular. However, little previous work has focused on its application to downstream tasks, e.g. using the converted data for grounding or reasoning. In this work, we aim to bridge this gap and use data-to-text methods as a means of encoding structured knowledge for knowledge-intensive applications, i.e. open-domain question answering (QA). Specifically, we propose a verbalizer-retriever-reader framework for open-domain QA over data and text, where verbalized tables from Wikipedia and triples from Wikidata are used as augmented knowledge sources. We show that our Unified Data and Text QA, UDT-QA, can effectively benefit from the expanded knowledge index, leading to large gains over text-only baselines. Notably, our approach sets the single-model state-of-the-art on Natural Questions. Furthermore, our analyses indicate that verbalized knowledge is preferred for answer reasoning in both adapted and hot-swap settings.