Although large-scale Language Models (LLMs) have achieved SOTA performance on a variety of NLP tasks, their performance on NER is still significantly below supervised baselines. This is due to the gap between the two tasks: NER is a sequence labeling task in nature, while LLMs are text-generation models. In this paper, we propose GPT-NER to resolve this issue. GPT-NER bridges the gap by transforming the sequence labeling task into a generation task that can be easily adapted by LLMs: e.g., the task of finding location entities in the input text "Columbus is a city" is transformed into generating the text sequence "@@Columbus## is a city", where the special tokens @@ and ## mark the entity to extract. To efficiently address the "hallucination" issue of LLMs, whereby LLMs have a strong inclination to over-confidently label NULL inputs as entities, we propose a self-verification strategy that prompts the LLM to ask itself whether an extracted entity belongs to a labeled entity tag. We conduct experiments on five widely adopted NER datasets, and GPT-NER achieves performance comparable to fully supervised baselines, which is, to the best of our knowledge, the first time this has been achieved. More importantly, we find that GPT-NER exhibits a greater ability in low-resource and few-shot setups: when the amount of training data is extremely scarce, GPT-NER performs significantly better than supervised models. This demonstrates the potential of GPT-NER in real-world NER applications where the number of labeled examples is limited.
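The labeling-to-generation transformation and the self-verification prompt described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function names (`bio_to_marked_text`, `verification_prompt`) and the exact prompt wording are assumptions; the paper only specifies that entity spans are wrapped in @@ and ## and that the LLM is asked whether an extracted entity has the labeled tag.

```python
def bio_to_marked_text(tokens, tags, target_type="LOC"):
    """Wrap spans BIO-tagged as `target_type` with @@ ... ## markers,
    turning a sequence labeling target into a text-generation target."""
    out = []
    i = 0
    while i < len(tokens):
        if tags[i] == f"B-{target_type}":
            # Collect the full entity span (B- tag plus any following I- tags).
            span = [tokens[i]]
            i += 1
            while i < len(tokens) and tags[i] == f"I-{target_type}":
                span.append(tokens[i])
                i += 1
            out.append("@@" + " ".join(span) + "##")
        else:
            out.append(tokens[i])
            i += 1
    return " ".join(out)


def verification_prompt(entity, entity_type, sentence):
    """Build a self-verification question asking the LLM to confirm an
    extracted entity (prompt wording is illustrative, not the paper's)."""
    return (f'The sentence is "{sentence}". '
            f'Is the word "{entity}" in this sentence a {entity_type} entity? '
            f'Answer yes or no.')


print(bio_to_marked_text(["Columbus", "is", "a", "city"],
                         ["B-LOC", "O", "O", "O"]))
# "@@Columbus## is a city"
```

A "yes" answer from the LLM would keep the extracted entity; a "no" would discard it, filtering out hallucinated spans on NULL inputs.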