Data-to-text (D2T) and text-to-data (T2D) are dual tasks that convert structured data, such as graphs or tables, into fluent text, and vice versa. These tasks are usually handled separately and use corpora extracted from a single source. Current systems leverage pre-trained language models fine-tuned on D2T or T2D tasks. This approach has two main limitations: first, a separate system has to be tuned for each task and source; second, learning is limited by the scarcity of available corpora. This paper considers a more general scenario where data are available from multiple heterogeneous sources. Each source, with its specific data format and semantic domain, provides a non-parallel corpus of text and structured data. We introduce a variational auto-encoder model with disentangled style and content variables that allows us to represent the diversity that stems from multiple sources of text and data. Our model is designed to handle the tasks of D2T and T2D jointly. We evaluate our model on several datasets and show that, by learning from multiple sources, our model closes the performance gap with its supervised single-source counterpart and outperforms it in some cases.
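The abstract does not include an implementation; the following is a minimal PyTorch sketch, under loose assumptions, of the core mechanism it names: a variational auto-encoder whose latent space is split into a style variable (intended to capture source-specific format and domain) and a content variable (intended to be shared across the text and data sides). All names, dimensions, and the MLP encoder/decoder are illustrative stand-ins, not the authors' architecture; inputs are fixed-size vectors standing in for encoded text or linearized structured data.

```python
# Hypothetical sketch of a VAE with disentangled style/content latents.
# Not the paper's model: layer shapes, names, and losses are assumptions.
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, input_dim=256, style_dim=16, content_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        # Separate posterior heads for the style and content variables.
        self.style_mu = nn.Linear(hidden, style_dim)
        self.style_logvar = nn.Linear(hidden, style_dim)
        self.content_mu = nn.Linear(hidden, content_dim)
        self.content_logvar = nn.Linear(hidden, content_dim)
        # The decoder conditions on both latents concatenated together.
        self.decoder = nn.Sequential(
            nn.Linear(style_dim + content_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, input_dim),
        )

    @staticmethod
    def reparameterize(mu, logvar):
        # z = mu + sigma * eps, with eps ~ N(0, I).
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def forward(self, x):
        h = self.encoder(x)
        s_mu, s_lv = self.style_mu(h), self.style_logvar(h)
        c_mu, c_lv = self.content_mu(h), self.content_logvar(h)
        s = self.reparameterize(s_mu, s_lv)
        c = self.reparameterize(c_mu, c_lv)
        x_hat = self.decoder(torch.cat([s, c], dim=-1))
        return x_hat, (s_mu, s_lv), (c_mu, c_lv)

def elbo_loss(x, x_hat, s_stats, c_stats):
    # Reconstruction term plus a KL term for each latent against N(0, I).
    recon = nn.functional.mse_loss(x_hat, x, reduction="sum")
    kl = 0.0
    for mu, lv in (s_stats, c_stats):
        kl = kl - 0.5 * torch.sum(1 + lv - mu.pow(2) - lv.exp())
    return recon + kl

model = DisentangledVAE()
x = torch.randn(8, 256)  # batch of 8 stand-in input encodings
x_hat, s_stats, c_stats = model(x)
elbo_loss(x, x_hat, s_stats, c_stats).backward()
```

Under this factorization, the dual tasks reduce to latent-variable swaps: encoding a structured-data input, keeping its content latent, and decoding with a text-side style latent corresponds to the D2T direction, and vice versa for T2D, which is how a single model can serve both tasks across heterogeneous, non-parallel sources.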