Incorporating large-scale pre-trained models into prototypical neural networks is the de-facto paradigm in few-shot named entity recognition. Existing methods, unfortunately, overlook the fact that embeddings from pre-trained models carry a prominently large amount of information about word frequencies, which biases prototypical neural networks against learning word entities. This mismatch constrains the synergy between the two models. We therefore propose a one-line-code normalization method to reconcile the mismatch, supported by both empirical and theoretical grounds. Our experiments on nine benchmark datasets show that our method outperforms the counterpart models and is comparable to state-of-the-art methods. Beyond the model enhancement, our work also offers an analytical viewpoint for addressing general problems in few-shot named entity recognition and other tasks that rely on pre-trained models or prototypical neural networks.
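To make the idea concrete, below is a minimal sketch of how a "one-line" embedding normalization could be slotted in front of a prototypical network. The abstract does not specify which normalization the paper uses, so the choice of L2 normalization here, as well as the helper names `normalize_embeddings` and `compute_prototypes`, are illustrative assumptions rather than the authors' method.

```python
import torch
import torch.nn.functional as F


def normalize_embeddings(embeddings: torch.Tensor) -> torch.Tensor:
    """Illustrative "one-line" normalization (an assumption, not the paper's exact method):
    L2-normalize each token embedding so that magnitude differences, one place where
    frequency-related information tends to concentrate, are removed before prototype computation."""
    return F.normalize(embeddings, p=2, dim=-1)  # the single added line


def compute_prototypes(support_emb: torch.Tensor, support_labels: torch.Tensor) -> torch.Tensor:
    """Standard prototypical-network step: class prototypes are the means of the
    (normalized) support embeddings. Shapes: support_emb [N, D], support_labels [N]
    with integer class ids in {0, ..., C-1}."""
    support_emb = normalize_embeddings(support_emb)
    num_classes = int(support_labels.max().item()) + 1
    return torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)]
    )
```

Under this reading, the rest of the pipeline (pre-trained encoder, nearest-prototype classification of query tokens) is left unchanged, which is what makes the fix a one-line insertion.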