The output structure of database-like tables, consisting of values arranged in horizontal rows and vertical columns identifiable by name, can cover a wide range of NLP tasks. Following this observation, we propose a framework for text-to-table neural models applicable to problems such as the extraction of line items, joint entity and relation extraction, or knowledge base population. The permutation-based decoder of our proposal is a generalized sequential method that incorporates information from all cells in the table. Training maximizes the expected log-likelihood of a table's content over all random permutations of the factorization order. During inference, we exploit the model's ability to generate cells in any order by searching over possible orderings to maximize the model's confidence and to avoid the substantial error accumulation that other sequential models are prone to. Experiments demonstrate the high practical value of the framework, which establishes state-of-the-art results on several challenging datasets, outperforming previous solutions by up to 15%.
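The training objective described above can be sketched as a Monte Carlo estimate: sample random permutations of the table's cells and average the negative log-likelihood of generating the cells in each sampled order. The snippet below is a minimal illustration only; `toy_cell_logprob` is a hypothetical stand-in for the decoder's per-cell conditional log-probability, not part of the proposed model.

```python
import math
import random

def toy_cell_logprob(cell_value, generated_so_far):
    # Hypothetical stand-in for the decoder's log p(cell | cells generated so far);
    # a dummy score that depends only on context length, so the sketch is runnable.
    return -math.log(1 + len(generated_so_far))

def permutation_loss(cells, num_samples=4, seed=0):
    """Monte Carlo estimate of the expected negative log-likelihood of a
    table's cells, averaged over random factorization orders."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        order = list(range(len(cells)))
        rng.shuffle(order)  # one random permutation of cell positions
        generated = []
        for idx in order:
            # Accumulate NLL of this cell given the cells decoded before it.
            total -= toy_cell_logprob(cells[idx], generated)
            generated.append(cells[idx])
    return total / num_samples
```

Averaging over sampled orders approximates the expectation over all permutations, which is intractable to enumerate for tables with many cells.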