Tables are often created with hierarchies, but existing works on table reasoning mainly focus on flat tables and neglect hierarchical tables. Hierarchical tables challenge existing methods with their hierarchical indexing and their implicit calculation and semantic relationships. This work presents HiTab, a free and open dataset for the research community to study question answering (QA) and natural language generation (NLG) over hierarchical tables. HiTab is a cross-domain dataset constructed from a wealth of statistical reports and Wikipedia pages, and has unique characteristics: (1) nearly all tables are hierarchical; (2) both target sentences for NLG and questions for QA are revised from high-quality descriptions in statistical reports, making them meaningful and diverse; and (3) HiTab provides fine-grained annotations on both entity and quantity alignment. Targeting hierarchical structure, we devise a novel hierarchy-aware logical form for symbolic reasoning over tables, which shows high effectiveness. Then, given annotations of entity and quantity alignment, we propose partially supervised training, which helps models largely reduce spurious predictions in the QA task. In the NLG task, we find that entity and quantity alignment also helps models generate better results in a conditional generation setting. Experimental results of state-of-the-art baselines suggest that this dataset presents a strong challenge and a valuable benchmark for future research.
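As a toy illustration (not taken from the paper or the HiTab release), the following sketch shows what "hierarchical indexing" means in practice: a table whose row and column headers each form a multi-level hierarchy, so answering a question requires resolving a path through both hierarchies rather than a flat row/column pair. The example uses pandas `MultiIndex` purely for exposition; the table values, header names, and the question are hypothetical.

```python
import pandas as pd

# Toy hierarchical table: two-level column headers (year > quarter)
# and two-level row headers (region > product), mimicking the kind of
# structure found in statistical-report tables.
columns = pd.MultiIndex.from_tuples(
    [("2020", "Q1"), ("2020", "Q2"), ("2021", "Q1"), ("2021", "Q2")],
    names=["year", "quarter"],
)
index = pd.MultiIndex.from_tuples(
    [("North", "Laptops"), ("North", "Phones"), ("South", "Laptops")],
    names=["region", "product"],
)
df = pd.DataFrame(
    [[10, 12, 14, 16], [20, 22, 24, 26], [30, 32, 34, 36]],
    index=index, columns=columns,
)

# A question such as "How many laptops were sold in the North in Q2 2020?"
# resolves to a lookup along both header hierarchies; methods built for
# flat tables have no notion of these multi-level paths.
answer = df.loc[("North", "Laptops"), ("2020", "Q2")]
print(answer)  # 12
```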