Prompt learning has recently become an effective linguistic tool for eliciting the knowledge of pre-trained language models (PLMs) on few-shot tasks. However, studies have shown that prompt learning still lacks robustness, since a suitable initialization of the continuous prompt and expert-crafted manual prompts are essential to the fine-tuning process. Moreover, humans also use their comparative abilities to activate existing knowledge when distinguishing between different examples. Motivated by this, we explore how contrastive samples can strengthen prompt learning. Specifically, we first propose our model ConsPrompt, which combines a prompt encoding network, a contrastive sampling module, and a contrastive scoring module. We then introduce two sampling strategies, similarity-based and label-based, to realize differential contrastive learning. The effectiveness of the proposed ConsPrompt is demonstrated on five different few-shot learning tasks, and the results show that the similarity-based sampling strategy is more effective than the label-based one when combined with contrastive learning. Our results also exhibit state-of-the-art performance and robustness across different few-shot settings, which suggests that ConsPrompt can serve as a better knowledge probe for PLMs.