Relational tables on the Web store a vast amount of knowledge. Owing to the wealth of such tables, there has been tremendous progress on a variety of tasks in the area of table understanding. However, existing work generally relies on heavily-engineered task-specific features and model architectures. In this paper, we present TURL, a novel framework that introduces the pre-training/fine-tuning paradigm to relational Web tables. During pre-training, our framework learns deep contextualized representations on relational tables in an unsupervised manner. Its universal model design with pre-trained representations can be applied to a wide range of tasks with minimal task-specific fine-tuning. Specifically, we propose a structure-aware Transformer encoder to model the row-column structure of relational tables, and present a new Masked Entity Recovery (MER) objective for pre-training to capture the semantics and knowledge in large-scale unlabeled data. We systematically evaluate TURL with a benchmark consisting of 6 different tasks for table understanding (e.g., relation extraction, cell filling). We show that TURL generalizes well to all tasks and substantially outperforms existing methods in almost all instances.
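To make the two components named above concrete, here is a minimal PyTorch sketch, not TURL's actual implementation: a row/column visibility mask that restricts self-attention to cells sharing a row or a column (a simplified stand-in for the structure-aware encoder), and a Masked Entity Recovery loss that hides entity cells and trains the encoder to recover them from visible context. All names (`visibility_mask`, `MaskedEntityRecovery`), shapes, and hyperparameters are illustrative assumptions; TURL additionally encodes table metadata and cell text, which is omitted here.

```python
# Illustrative sketch only -- not the authors' code. Assumes each entity cell
# of one table is given by an id plus its (row, column) position.
import torch
import torch.nn as nn


def visibility_mask(rows, cols):
    """Boolean [n, n] matrix: cell i may attend to cell j iff they share a row or a column."""
    same_row = rows.unsqueeze(0) == rows.unsqueeze(1)
    same_col = cols.unsqueeze(0) == cols.unsqueeze(1)
    return same_row | same_col


class MaskedEntityRecovery(nn.Module):
    """Toy MER pre-training head: mask some entity cells, recover them from visible context."""

    def __init__(self, num_entities, dim=64, heads=4, layers=2):
        super().__init__()
        self.mask_id = num_entities                      # reserve one extra id for [MASK]
        self.embed = nn.Embedding(num_entities + 1, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.out = nn.Linear(dim, num_entities)

    def forward(self, entity_ids, rows, cols, mask_prob=0.2):
        masked = torch.rand(entity_ids.shape) < mask_prob
        masked[0] |= masked.sum() == 0                   # guarantee at least one target
        ids = entity_ids.clone()
        ids[masked] = self.mask_id
        # PyTorch convention: True in the attention mask means "may NOT attend".
        attn_mask = ~visibility_mask(rows, cols)
        h = self.encoder(self.embed(ids).unsqueeze(0), mask=attn_mask).squeeze(0)
        return nn.functional.cross_entropy(self.out(h[masked]), entity_ids[masked])


if __name__ == "__main__":
    # A 3x3 toy table: 9 entity cells with their row/column indices.
    rows = torch.tensor([0, 0, 0, 1, 1, 1, 2, 2, 2])
    cols = torch.tensor([0, 1, 2, 0, 1, 2, 0, 1, 2])
    model = MaskedEntityRecovery(num_entities=9)
    print(model(torch.arange(9), rows, cols))            # scalar MER loss
```

The visibility mask is what makes the encoder "structure-aware" in this sketch: a masked cell can only be recovered from entities in its own row or column, which is exactly the signal tasks like cell filling rely on.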