Large language models (LLMs) have been leveraged for several years now, obtaining state-of-the-art performance in recognizing entities from modern documents. In recent months, the conversational agent ChatGPT has "prompted" considerable interest in the scientific community and the public due to its capacity to generate plausible-sounding answers. In this paper, we explore this ability by probing it on the named entity recognition and classification (NERC) task on primary sources (e.g., historical newspapers and classical commentaries) in a zero-shot manner, and by comparing it with state-of-the-art LM-based systems. Our findings indicate several shortcomings in identifying entities in historical text, ranging from the consistency of entity annotation guidelines, entity complexity, and code-switching to the specificity of prompting. Moreover, as expected, the inaccessibility of historical archives to the public (and thus their absence from the Internet) also impacts its performance.