Visual information extraction (VIE) plays an important role in Document Intelligence. Generally, it is divided into two tasks: semantic entity recognition (SER) and relation extraction (RE). Recently, pre-trained models for documents have achieved substantial progress in VIE, particularly in SER. However, most existing models learn geometric representations in an implicit way, which has been found insufficient for the RE task, since geometric information is especially crucial for RE. Moreover, we reveal that another factor limiting RE performance is the objective gap between the pre-training phase and the RE fine-tuning phase. To tackle these issues, we propose in this paper a multi-modal framework, named GeoLayoutLM, for VIE. GeoLayoutLM explicitly models geometric relations in pre-training, which we call geometric pre-training. Geometric pre-training is achieved by three specially designed geometry-related pre-training tasks. Additionally, novel relation heads, which are pre-trained by the geometric pre-training tasks and fine-tuned for RE, are elaborately designed to enrich and enhance the feature representation. Extensive experiments on standard VIE benchmarks show that GeoLayoutLM achieves highly competitive scores on the SER task and significantly outperforms the previous state-of-the-art methods for RE (e.g., the F1 score of RE on FUNSD is boosted from 80.35% to 89.45%). The code and models are publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/GeoLayoutLM