As an effective approach to tuning pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using \textit{cloze}-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning to fine-grained entity typing in fully supervised, few-shot, and zero-shot scenarios. We first develop a simple and effective prompt-learning pipeline by constructing entity-oriented verbalizers and templates and conducting masked language modeling. Further, to tackle the zero-shot regime, we propose a self-supervised strategy that carries out distribution-level optimization in prompt-learning to automatically summarize the information of entity types. Extensive experiments on three fine-grained entity typing benchmarks (with up to 86 classes) under fully supervised, few-shot, and zero-shot settings show that prompt-learning methods significantly outperform fine-tuning baselines, especially when the training data is insufficient.