Most NER methods rely on extensive labeled data for model training and struggle in low-resource scenarios where training data is scarce. Existing dominant approaches usually face the challenge that the target domain has a different label set from a resource-rich source domain, which can be summarized as class transfer and domain transfer. In this paper, we propose a lightweight tuning paradigm for low-resource NER via pluggable prompting (LightNER). Specifically, we construct a unified learnable verbalizer of entity categories to generate the entity span sequence and entity categories without any label-specific classifiers, thus addressing the class transfer issue. We further propose a pluggable guidance module that incorporates learnable parameters into the self-attention layer as guidance, which can re-modulate the attention and adapt the pre-trained weights. Note that we only tune the inserted modules while keeping all parameters of the pre-trained language model fixed, which makes our approach lightweight and flexible for low-resource scenarios and enables better knowledge transfer across domains. Experimental results show that LightNER achieves comparable performance in the standard supervised setting and outperforms strong baselines in low-resource settings. Code is available at https://github.com/zjunlp/DeepKE/tree/main/example/ner/few-shot.
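To make the "pluggable guidance module" concrete, below is a minimal PyTorch sketch of one way such a module could be realized: learnable key/value prompt vectors prepended to the frozen keys and values of a self-attention layer, so the softmax re-modulates attention over the prompts while the pre-trained weights stay fixed. The class name, parameterization, and shapes are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


class PluggableGuidance(nn.Module):
    """Hypothetical sketch: learnable key/value prompts for one attention layer.

    Only these parameters are trained; the pre-trained model weights are frozen.
    Shapes follow a standard multi-head attention layer with d_model = n_heads * d_head.
    """

    def __init__(self, prompt_len: int, n_heads: int, d_head: int):
        super().__init__()
        # One learnable prompt per attention layer (assumed parameterization).
        self.prompt_k = nn.Parameter(torch.randn(n_heads, prompt_len, d_head) * 0.02)
        self.prompt_v = nn.Parameter(torch.randn(n_heads, prompt_len, d_head) * 0.02)

    def extend(self, k: torch.Tensor, v: torch.Tensor):
        """Prepend the learnable prompts to the frozen keys/values.

        k, v: (batch, n_heads, seq_len, d_head) from the frozen attention layer.
        Returns tensors of shape (batch, n_heads, prompt_len + seq_len, d_head),
        so subsequent attention scores are re-modulated by the prompt positions.
        """
        batch = k.size(0)
        pk = self.prompt_k.unsqueeze(0).expand(batch, -1, -1, -1)
        pv = self.prompt_v.unsqueeze(0).expand(batch, -1, -1, -1)
        return torch.cat([pk, k], dim=2), torch.cat([pv, v], dim=2)
```

In this sketch, lightweight tuning would amount to freezing the backbone (e.g., `model.requires_grad_(False)`) and passing only the guidance parameters (plus the learnable verbalizer embeddings) to the optimizer.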