Natural language prompts have been shown to facilitate cross-task generalization for large language models. However, with no or few labeled examples, cross-task performance is highly sensitive to the choice of prompts, while selecting a high-performing prompt is challenging given the scarcity of labels. To address this issue, we propose a Zero-Label Prompt Selection (ZPS) method that selects prompts without any labeled data or gradient updates. Specifically, given a set of candidate human-written prompts for a task, ZPS labels a set of unlabeled data with a prompt ensemble and uses the resulting pseudo-labels for prompt selection. Experiments show that ZPS improves over prior methods by a sizeable margin in zero-label performance. We also extend ZPS to the few-shot setting and show its advantages over strong baselines such as prompt tuning and model tuning.
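The selection step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact implementation: it assumes a caller-supplied `predict(prompt, x)` function that queries the language model, and it instantiates the prompt ensemble as a simple majority vote over the candidate prompts' predictions; each prompt is then scored by its agreement with the resulting pseudo-labels.

```python
from collections import Counter
from typing import Callable, Sequence


def zero_label_prompt_selection(
    prompts: Sequence[str],
    unlabeled_inputs: Sequence[str],
    predict: Callable[[str, str], str],  # assumed model-query function
) -> str:
    """Select the prompt whose predictions best agree with the
    ensemble's pseudo-labels (majority vote across all prompts)."""
    # Collect each prompt's prediction for every unlabeled example.
    predictions = [
        [predict(prompt, x) for x in unlabeled_inputs] for prompt in prompts
    ]
    # Pseudo-label each example by majority vote across prompts.
    pseudo_labels = [
        Counter(column).most_common(1)[0][0] for column in zip(*predictions)
    ]
    # Score each prompt by its agreement with the pseudo-labels.
    scores = [
        sum(pred == label for pred, label in zip(preds, pseudo_labels))
        for preds in predictions
    ]
    # Return the highest-agreement prompt.
    return prompts[max(range(len(prompts)), key=scores.__getitem__)]
```

Because both the pseudo-labeling and the scoring only require forward predictions, no labeled data or gradient updates are involved; more sophisticated ensembling schemes (e.g., probability averaging) could be substituted for the majority vote shown here.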