Recently, interest has grown in applying machine learning to the problem of table structure inference and extraction from unstructured documents. However, progress in this area has been challenging both to make and to measure, due to several issues that arise in training and evaluating models from labeled data. These include challenges as fundamental as the lack of a single definitive ground truth output for each input sample and the lack of an ideal metric for measuring partial correctness for this task. To address these, we propose a new dataset, PubMed Tables One Million (PubTables-1M), and a new class of metric, grid table similarity (GriTS). PubTables-1M is nearly twice as large as the previous largest comparable dataset, can be used for models across multiple architectures and modalities, and addresses issues such as ambiguity and lack of consistency in the annotations. We apply DETR to table extraction for the first time and show that object detection models trained on PubTables-1M produce excellent results out-of-the-box for all three tasks of detection, structure recognition, and functional analysis. We describe the dataset in detail to enable others to build on our work and to combine this data with other datasets for these and related tasks. It is our hope that PubTables-1M and the proposed metrics can further progress in this area by creating a benchmark suitable for training and evaluating a wide variety of models for table extraction. Data and code will be released at https://github.com/microsoft/table-transformer.