Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios. Recent works have focused on automatically searching discrete or continuous prompts or optimized verbalizers, yet studies of demonstrations remain limited. Concretely, demonstration examples are crucial to the final performance of prompt-tuning. In this paper, we propose a novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling. Furthermore, the proposed approach can be: (i) plugged into any previous prompt-tuning approach; (ii) extended to widespread classification tasks with a large number of categories. Experimental results on 16 datasets illustrate that our method, integrated with the previous approaches LM-BFF and P-tuning, can yield better performance. Code is available at https://github.com/zjunlp/PromptKG/tree/main/research/Demo-Tuning.
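To make the "free of demonstration sampling" idea more concrete, below is a minimal sketch of one way such a component could be built: a learnable virtual demonstration appended to the input embeddings, trained with an InfoNCE-style contrastive term. All names (VirtualDemo, info_nce, demo_len, the temperature value) and the choice of paired views are illustrative assumptions, not the paper's implementation; see the linked repository for the actual code.

```python
# A rough sketch, not the authors' implementation. It shows (a) learnable
# "virtual demonstration" embeddings that replace sampled demonstrations and
# (b) an InfoNCE-style contrastive loss over two views of the same instance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VirtualDemo(nn.Module):
    """Learnable continuous embeddings standing in for sampled demonstrations."""
    def __init__(self, demo_len: int, hidden_size: int):
        super().__init__()
        self.demo = nn.Parameter(torch.randn(demo_len, hidden_size) * 0.02)

    def append(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) -> (batch, seq_len + demo_len, hidden)
        batch = input_embeds.size(0)
        demo = self.demo.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([input_embeds, demo], dim=1)

def info_nce(anchor: torch.Tensor, positive: torch.Tensor,
             temperature: float = 0.1) -> torch.Tensor:
    """Each anchor's positive is the matching row of `positive`;
    all other rows in the batch serve as in-batch negatives."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature           # (batch, batch)
    labels = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, labels)

# Usage with dummy tensors: extend prompt embeddings with the virtual
# demonstration, then contrast [MASK] representations of two views
# (e.g., the same input paired with different demonstrations).
dummy_embeds = torch.randn(8, 32, 768)
vd = VirtualDemo(demo_len=4, hidden_size=768)
extended = vd.append(dummy_embeds)                          # (8, 36, 768)

mask_repr_view1 = torch.randn(8, 768)
mask_repr_view2 = torch.randn(8, 768)
loss_contrastive = info_nce(mask_repr_view1, mask_repr_view2)
```

In practice, a contrastive term of this kind would be added to the usual prompt-based classification loss, and the virtual demonstration parameters would be trained jointly with the prompt, which is what makes the component pluggable into methods such as LM-BFF or P-tuning.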