Recently, prompt-based learning has become a popular approach to many Natural Language Processing (NLP) tasks: by inserting a template into the model input, it converts the task into a cloze-style one, smoothing out the differences between the Pre-trained Language Model (PLM) and the downstream task. In relation classification, however, it is difficult to map the masked output to the relation labels because of their abundant semantic information, e.g. ``org:founded_by''. Consequently, a pre-trained model still needs sufficient labelled data to fit the relations. To mitigate this challenge, in this paper we present a novel prompt-based learning method, namely LabelPrompt, for the relation classification task. It is an extraordinarily intuitive approach driven by a simple motivation: ``GIVE MODEL CHOICES!''. First, we define additional tokens to represent the relation labels, treat these tokens as a verbalizer with semantic initialisation, and incorporate them into a prompt template. Then, to address the inconsistency between the predicted relation and the given entities, we design an entity-aware module based on contrastive learning to mitigate the problem. Finally, we apply an attention query strategy to the self-attention layers to distinguish two types of tokens, prompt tokens and sequence tokens. The proposed strategy effectively improves the adaptation capability of prompt-based learning for relation classification when only a small amount of labelled data is available. Extensive experimental results obtained on several benchmark datasets demonstrate the superiority of the proposed LabelPrompt method, particularly in the few-shot scenario.
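The core idea above can be illustrated with a minimal, library-free sketch: each relation label is assigned its own dedicated token (the verbalizer), and the input is wrapped in a cloze-style template so the model only has to "choose" among label tokens at the masked position. All names and the template wording here are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical relation label set (TACRED-style names are assumptions).
RELATION_LABELS = ["org:founded_by", "per:employee_of", "no_relation"]

# One new token per label serves as that label's verbalizer entry;
# in practice these tokens would be added to the PLM's vocabulary and
# semantically initialised from the label words.
label_tokens = {label: f"[REL{i}]" for i, label in enumerate(RELATION_LABELS)}

def build_prompt(sentence: str, head: str, tail: str) -> str:
    """Convert a relation-classification instance into a cloze-style input."""
    return f"{sentence} The relation between {head} and {tail} is [MASK]."

prompt = build_prompt("Steve Jobs co-founded Apple in 1976.",
                      "Apple", "Steve Jobs")
```

At inference time, the model scores only the label tokens at the `[MASK]` position, which sidesteps mapping free-form masked output onto semantically rich label strings.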