Recent advances in large pre-trained language models (PLMs) have led to impressive gains on natural language understanding (NLU) tasks with task-specific fine-tuning. However, directly fine-tuning PLMs relies heavily on a sufficient amount of labeled training instances, which are usually hard to obtain. Prompt-based tuning of PLMs has proven powerful for various downstream few-shot tasks. Existing works studying prompt-based tuning for few-shot NLU tasks mainly focus on deriving proper label words with a verbalizer or generating prompt templates to elicit semantics from PLMs. In addition, conventional data augmentation strategies such as synonym substitution, though widely adopted in low-resource scenarios, only bring marginal improvements for prompt-based few-shot learning. Thus, an important research question arises: how can we design effective data augmentation methods for prompt-based few-shot tuning? To this end, considering that label semantics are essential in prompt-based tuning, we propose PromptDA, a novel label-guided data augmentation framework that exploits enriched label semantic information for data augmentation. Extensive experimental results on few-shot text classification tasks demonstrate the superior performance of the proposed framework through effectively leveraging label semantics and data augmentation for natural language understanding. Our code is available at https://github.com/canyuchen/PromptDA.
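To make the prompt-based setup concrete, below is a minimal sketch of cloze-style classification with a multi-word verbalizer, the component that label-guided augmentation builds on. It assumes a Hugging Face `transformers` masked LM; the template, the label-word lists, and the scoring rule are illustrative assumptions, not the exact PromptDA procedure.

```python
# A hypothetical sketch: prompt-based few-shot classification with a
# verbalizer that maps each class to several label words. Label-guided
# augmentation can then enlarge a few-shot training set by pairing each
# instance with every label word of its class.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Template: wrap the input in a cloze prompt; the PLM fills the mask slot.
text = "The movie was a delight from start to finish."
prompt = f"{text} It was {tokenizer.mask_token}."

# Verbalizer with multiple label words per class (hypothetical word lists).
verbalizer = {
    "positive": ["great", "good", "wonderful"],
    "negative": ["terrible", "bad", "awful"],
}

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and read out its vocabulary logits.
mask_idx = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
mask_logits = logits[0, mask_idx].squeeze(0)

# Score each class by the mean logit of its label words
# (taking the first subtoken of each word for simplicity).
scores = {}
for label, words in verbalizer.items():
    ids = [tokenizer.encode(" " + w, add_special_tokens=False)[0] for w in words]
    scores[label] = mask_logits[ids].mean().item()

print(max(scores, key=scores.get))
```

In few-shot tuning, the same mask logits would be trained against the label words of the gold class, so expanding each class to several semantically related label words effectively multiplies the supervision available per instance.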