Cross-domain NER is a challenging task that addresses the low-resource problem in practical scenarios. Previous typical solutions mainly obtain an NER model by fine-tuning pre-trained language models (PLMs) with data from a rich-resource domain and adapting it to the target domain. Owing to the mismatch between entity types in different domains, previous approaches normally tune all parameters of PLMs, ending up with an entirely new NER model for each domain. Moreover, current models focus only on leveraging knowledge in one general source domain and fail to successfully transfer knowledge from multiple sources to the target. To address these issues, we introduce Collaborative Domain-Prefix Tuning for cross-domain NER (CP-NER) based on text-to-text generative PLMs. Specifically, we present text-to-text generation grounded in domain-related instructors to transfer knowledge to new-domain NER tasks without structural modifications. We utilize frozen PLMs and conduct collaborative domain-prefix tuning to stimulate the potential of PLMs to handle NER tasks across various domains. Experimental results on the Cross-NER benchmark show that the proposed approach has flexible transfer ability and performs better on both single-source and multiple-source cross-domain NER tasks. Code is available at https://github.com/zjunlp/DeepKE/tree/main/example/ner/cross.
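The core mechanism is parameter-efficient prefix tuning over a frozen text-to-text PLM: NER is cast as generation conditioned on a domain-related instruction, and only a small per-domain prefix is trained. The sketch below illustrates this setup with the Hugging Face PEFT library; it is not the authors' implementation, and the backbone size, prefix length, and input/output format are illustrative assumptions.

```python
# A minimal sketch of domain-prefix tuning with a frozen T5 backbone,
# using the Hugging Face PEFT library (assumptions: t5-base backbone,
# prefix length 20, and the example input/output format shown below).
from transformers import AutoTokenizer, T5ForConditionalGeneration
from peft import PrefixTuningConfig, TaskType, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Only the prefix parameters are trainable; the PLM itself stays frozen.
config = PrefixTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    num_virtual_tokens=20,  # per-domain prefix length (assumed value)
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # reports prefix params only

# Cross-domain NER as text-to-text generation: the source sequence
# carries a domain-related instruction, and the target linearizes
# the entity spans and their types (hypothetical format).
src = "ner from music domain: Bob Dylan released Blonde on Blonde in 1966."
tgt = "Bob Dylan is person. Blonde on Blonde is album."
inputs = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # gradients flow to the prefix only
```

Because the backbone is shared and frozen, adapting to a new domain amounts to training (or combining) lightweight prefixes rather than producing an entirely new NER model per domain.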