Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performance relies heavily on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework, Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector, to help the model bridge the structural gap between tables and texts. Experimental results on three benchmark datasets, with three state-of-the-art models, demonstrate that the proposed framework significantly improves model performance across various evaluation metrics.