For named entity recognition (NER) in zero-resource languages, knowledge distillation is an effective means of transferring language-independent knowledge from rich-resource source languages to zero-resource target languages. These approaches typically adopt a teacher-student architecture, where the teacher network is trained on the source language and the student network learns from the teacher and is expected to perform well on the target language. Despite the impressive performance these methods achieve, we argue that they have two limitations. First, the teacher network fails to effectively learn the language-independent knowledge shared across languages, owing to differences in feature distribution between the source and target languages. Second, the student network acquires all of its knowledge from the teacher and thus neglects target language-specific knowledge. Both limitations hinder the model's performance on the target language. This paper proposes an unsupervised prototype knowledge distillation network (ProKD) to address these issues. Specifically, ProKD presents a contrastive learning-based prototype alignment method that achieves class-level feature alignment by adjusting the distances among source- and target-language prototypes, boosting the teacher network's capacity to acquire language-independent knowledge. In addition, ProKD introduces a prototypical self-training method that retrains the student network on the target data using the samples' distances to the prototypes, thereby capturing the intrinsic structure of the target language and enhancing the student network's ability to acquire language-specific knowledge. Extensive experiments on three benchmark cross-lingual NER datasets demonstrate the effectiveness of our approach.
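To make the prototype alignment idea concrete, here is a minimal PyTorch sketch of the two ingredients the abstract names: class prototypes computed as per-class mean token features, and a contrastive (InfoNCE-style) loss that pulls same-class source and target prototypes together while pushing different-class prototypes apart. The function names, the temperature `tau`, and the use of cosine similarity are illustrative assumptions, not ProKD's published implementation.

```python
import torch
import torch.nn.functional as F


def compute_prototypes(feats: torch.Tensor, labels: torch.Tensor,
                       num_classes: int) -> torch.Tensor:
    """Class prototype = mean feature of all tokens labeled with that class.

    feats: (N, d) token features; labels: (N,) class ids.
    Returns (C, d); classes absent from the batch keep a zero vector.
    """
    protos = feats.new_zeros(num_classes, feats.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = feats[mask].mean(dim=0)
    return protos


def proto_align_loss(src_protos: torch.Tensor, tgt_protos: torch.Tensor,
                     tau: float = 0.1) -> torch.Tensor:
    """InfoNCE-style alignment: each source prototype is pulled toward the
    same-class target prototype and pushed away from other-class target
    prototypes, so class features align across languages."""
    src = F.normalize(src_protos, dim=1)           # (C, d) unit vectors
    tgt = F.normalize(tgt_protos, dim=1)
    logits = src @ tgt.t() / tau                   # (C, C) cosine similarities
    targets = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, targets)


# Toy usage: 5 entity classes, 64-dim features. In the unsupervised setting
# the target side has no gold labels; the teacher's predictions would supply
# them, so random ids stand in here.
C, d = 5, 64
src_p = compute_prototypes(torch.randn(40, d), torch.randint(0, C, (40,)), C)
tgt_p = compute_prototypes(torch.randn(30, d), torch.randint(0, C, (30,)), C)
print(proto_align_loss(src_p, tgt_p).item())
```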
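The prototypical self-training step can be sketched the same way: each unlabeled target token receives a soft label from a softmax over its negative distances to the class prototypes, and the student is retrained against these soft labels. Again a minimal sketch under assumed choices (Euclidean distance, soft cross-entropy, the names `proto_pseudo_labels` and `self_training_loss`); the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F


def proto_pseudo_labels(feats: torch.Tensor, protos: torch.Tensor,
                        tau: float = 1.0) -> torch.Tensor:
    """Soft pseudo-labels from prototype distances: a softmax over negative
    Euclidean distances turns each token's distances to the C prototypes
    into a class distribution. feats: (N, d); protos: (C, d) -> (N, C)."""
    dists = torch.cdist(feats, protos)             # (N, C) pairwise distances
    return F.softmax(-dists / tau, dim=1)


def self_training_loss(student_logits: torch.Tensor,
                       soft_targets: torch.Tensor) -> torch.Tensor:
    """Retrain the student on target data against the distance-based soft
    labels (soft cross-entropy), injecting target-language structure that
    pure teacher distillation would miss."""
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(soft_targets * log_probs).sum(dim=1).mean()


# Toy usage with random target-language token features and prototypes.
N, C, d = 30, 5, 64
feats, protos = torch.randn(N, d), torch.randn(C, d)
soft = proto_pseudo_labels(feats, protos)
student_logits = torch.randn(N, C, requires_grad=True)
loss = self_training_loss(student_logits, soft)
loss.backward()                                    # gradients flow to the student
print(loss.item())
```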