Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated remarkable results on various natural language processing (NLP) tasks with in-context learning, which performs inference conditioned on a few demonstration examples. Despite these successes, no investigation has assessed the ability of LLMs to perform document information extraction (DIE) via in-context learning. Applying LLMs to DIE poses two challenges: the modality gap and the task gap. To this end, we propose a simple but effective in-context learning framework called ICL-D3IE, which enables LLMs to perform DIE with different types of demonstration examples. Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations, which benefit all test instances. We design layout-aware demonstrations that enable LLMs to understand positional relationships between segments. We introduce formatting demonstrations for easy answer extraction. Additionally, the framework improves these diverse demonstrations by updating them iteratively. Our experiments on three widely used benchmark datasets demonstrate that ICL-D3IE enables GPT-3/ChatGPT to achieve superior performance compared to previous pre-trained methods fine-tuned on the full training set, in both the in-distribution (ID) and out-of-distribution (OOD) settings.
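The abstract describes assembling three kinds of demonstrations (hard, layout-aware, and formatting) ahead of each test instance. A minimal sketch of that prompt-construction step is below; the function name, section headers, and prompt layout are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of building an ICL-D3IE-style prompt. The ordering
# (hard -> layout -> formatting -> test) and all names are assumptions.

def build_prompt(hard_demos, layout_demos, format_demos, test_segment):
    """Concatenate the three demonstration types before the test segment,
    so the LLM labels the segment conditioned on all of them."""
    parts = ["### Hard demonstrations (difficult, distinct segments):"]
    parts.extend(hard_demos)
    parts.append("### Layout demonstrations (positional relationships):")
    parts.extend(layout_demos)
    parts.append("### Formatting demonstrations (expected answer format):")
    parts.extend(format_demos)
    parts.append("### Test:")
    parts.append(f"Segment: {test_segment}\nLabel:")
    return "\n".join(parts)

prompt = build_prompt(
    hard_demos=["Segment: 'TOTAL $12.50'\nLabel: total"],
    layout_demos=["'TOTAL' appears left of '$12.50', so '$12.50' is its value."],
    format_demos=["Answer with a single label word, e.g. 'total'."],
    test_segment="'DATE 03/14/2019'",
)
print(prompt)
```

The iterative-updating step described in the abstract would then replace or refine entries in these demonstration lists between rounds.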