In this paper, we introduce SciGen, a new challenge dataset for the task of reasoning-aware data-to-text generation, consisting of tables from scientific articles and their corresponding descriptions. Describing scientific tables goes beyond the surface realization of the table content and requires reasoning over table values. The unique properties of SciGen are that (1) tables mostly contain numerical values, and (2) the corresponding descriptions require arithmetic reasoning. SciGen is therefore the first dataset that assesses the arithmetic reasoning capabilities of generation models on complex input structures, i.e., tables from scientific articles. We study the effectiveness of state-of-the-art data-to-text generation models on SciGen and evaluate the results using common metrics as well as human evaluation. Our results and analyses show that (a) while humans readily reason when describing scientific tables, the ability of state-of-the-art models to do so is severely limited, (b) while adding more training data improves the results, it is not by itself a solution for reasoning-aware text generation, and (c) one of the main bottlenecks for this task is the lack of proper automatic evaluation metrics. The data, code, and annotations for human evaluation will be available at https://github.com/UKPLab/SciGen. SciGen opens new avenues for future research in reasoning-aware text generation and evaluation.