Document information extraction tasks performed by humans create data consisting of a PDF or document image input and extracted string outputs. This end-to-end data is naturally consumed and produced when performing the task because it is valuable in and of itself, so it is available at no additional cost. Unfortunately, state-of-the-art word classification methods for information extraction cannot use this data; they instead require word-level labels, which are expensive to create and consequently not available for many real-life tasks. In this paper we propose the Attend, Copy, Parse architecture, a deep neural network model that can be trained directly on end-to-end data, bypassing the need for word-level labels. We evaluate the proposed architecture on a large, diverse set of invoices and outperform a state-of-the-art production system based on word classification. We believe our proposed architecture can be used on many real-life information extraction tasks where word classification cannot be applied due to a lack of the required word-level labels.