Table-to-text generation aims to automatically generate text that helps people conveniently obtain salient information from tables. Recent works explicitly decompose the generation process into content planning and surface generation stages, employing a separate autoregressive network for each. However, they are computationally expensive due to the non-parallelizable nature of autoregressive decoding and the redundant parameters of the two networks. In this paper, we propose the first totally non-autoregressive table-to-text model (Plan-then-Seam, PTS), which produces its outputs in parallel with a single network. PTS first writes and calibrates one plan of the content to be generated with a novel rethinking pointer predictor, and then takes the plan as the context for seaming to decode the description. These two steps share parameters and are performed iteratively to capture token inter-dependency while preserving parallel decoding. Experiments on two public benchmarks show that PTS achieves a 3.0~5.6 times inference speedup and reduces parameters by 50%, while maintaining at least comparable performance against strong two-stage table-to-text competitors.
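The plan-then-seam idea described above can be illustrated with a minimal toy sketch. This is a hypothetical, model-free illustration assuming a table given as key-value cells: the `plan` step stands in for the pointer-based content selection (with duplicate/empty-cell filtering as a crude stand-in for the rethinking calibration), and the `seam` step verbalizes all selected slots independently, mimicking parallel non-autoregressive decoding. None of these function names or heuristics come from the paper itself.

```python
# Hypothetical sketch of the two PTS stages; illustrative only,
# not the paper's actual architecture or training procedure.

def plan(table_cells):
    """Plan step: select and order the cells to mention.
    Duplicate keys and empty values are dropped, a crude stand-in
    for the rethinking pointer predictor's calibration."""
    seen, ordered = set(), []
    for key, value in table_cells:
        if value and key not in seen:  # "rethink": skip empty/duplicate slots
            seen.add(key)
            ordered.append((key, value))
    return ordered

def seam(content_plan):
    """Seam step: each slot is verbalized independently ("in parallel"),
    mimicking non-autoregressive decoding, then joined into a description."""
    pieces = [f"{key} is {value}" for key, value in content_plan]
    return ", ".join(pieces) + "."

table = [("name", "Ada Lovelace"), ("field", "mathematics"),
         ("name", "Ada Lovelace"), ("born", "")]
print(seam(plan(table)))  # name is Ada Lovelace, field is mathematics.
```

In the actual model, both steps would be realized by one shared non-autoregressive network applied iteratively, which is what yields the reported speedup and parameter reduction.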