Many business scenarios require the automated generation of descriptive, human-readable text from structured input data. Hence, fact-to-text generation systems have been developed for various downstream tasks such as generating soccer reports, weather and financial reports, medical reports, and person biographies. Unfortunately, previous work on fact-to-text (F2T) generation has focused primarily on English, mainly due to the high availability of relevant datasets. Only recently was the problem of cross-lingual fact-to-text (XF2T) generation across multiple languages proposed, along with XALIGN, a dataset covering eight languages. However, there has been no rigorous work on the actual XF2T generation problem. We extend the XALIGN dataset with annotated data for four more languages: Punjabi, Malayalam, Assamese and Oriya. We conduct an extensive study using popular Transformer-based text generation models on our extended multi-lingual dataset, which we call XALIGNV2. Further, we investigate the performance of different text generation strategies: multiple variations of pretraining, fact-aware embeddings and structure-aware input encoding. Our extensive experiments show that a multi-lingual mT5 model which uses fact-aware embeddings with structure-aware input encoding leads to the best results on average across the twelve languages. We make our code, dataset and model publicly available, and hope that this will help advance further research in this critical area.
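As a rough illustration of what structure-aware input encoding for an XF2T model might look like, the sketch below linearizes (subject, relation, object) fact triples into a tagged input string that a seq2seq model such as mT5 could consume. The role tags (`<S>`, `<R>`, `<O>`) and the prompt format are illustrative assumptions, not the paper's exact scheme.

```python
def linearize_facts(facts, target_lang):
    """Turn fact triples into a structure-aware input string.

    Each triple component is marked with a role tag so the model can
    distinguish subjects, relations, and objects after tokenization.
    Tag names and prompt wording are hypothetical examples.
    """
    parts = [f"generate in {target_lang}:"]
    for subj, rel, obj in facts:
        parts.append(f"<S> {subj} <R> {rel} <O> {obj}")
    return " ".join(parts)

facts = [
    ("Marie Curie", "occupation", "physicist"),
    ("Marie Curie", "award", "Nobel Prize in Physics"),
]
print(linearize_facts(facts, "Hindi"))
# → generate in Hindi: <S> Marie Curie <R> occupation <O> physicist <S> Marie Curie <R> award <O> Nobel Prize in Physics
```

In practice, the role tags would typically be registered as special tokens in the model's tokenizer so they are not split into subwords.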