Fine-tuned pre-trained language models (PLMs) have achieved remarkable performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing a large number of language prompts is cumbersome and error-prone, and for auto-generated prompts, verifying their effectiveness in non-few-shot scenarios is also expensive and time-consuming. Hence, it remains challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts composed of several sub-prompts. In this way, PTR is able to encode the prior knowledge of each class into prompt tuning. We conduct experiments on relation classification, a typical and complicated many-class classification task, and the results show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach for exploiting both human prior knowledge and PLMs on such complicated classification tasks.
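To make the idea of composing a prompt from sub-prompts more concrete, the following is a minimal Python sketch of how a PTR-style template for relation classification might be assembled. The template strings, label words, and function names are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# A minimal, illustrative sketch of composing a PTR-style prompt from sub-prompts
# for relation classification. All templates and label words below are assumptions
# for illustration, not the authors' implementation.

from typing import Dict, Tuple

MASK = "[MASK]"


def compose_prompt(sentence: str, subj: str, obj: str) -> str:
    """Conjunctively combine three sub-prompts into one template.

    Logic rule (informally):
        type(subj) = t_s  AND  relation(subj, obj) = r  AND  type(obj) = t_o  ->  class
    Each conjunct contributes one [MASK] position:
        sub-prompt 1: "the [MASK] <subj>"   -> subject entity type
        sub-prompt 2: "<subj> [MASK] <obj>" -> relation words (entity mentions shared with 1 and 3)
        sub-prompt 3: "the [MASK] <obj>"    -> object entity type
    """
    template = f"the {MASK} {subj} {MASK} the {MASK} {obj}"
    return f"{sentence} {template}"


# Each relation class maps to the label words expected at the three [MASK]
# positions; a PLM's masked-language-model head scores these fillers.
LABEL_WORDS: Dict[str, Tuple[str, str, str]] = {
    "per:parents": ("person", "'s parent was", "person"),
    "org:founded_by": ("organization", "was founded by", "person"),
    "no_relation": ("entity", "is irrelevant to", "entity"),
}

if __name__ == "__main__":
    prompt = compose_prompt(
        "Mark Twain was the father of Langdon.", "Mark Twain", "Langdon"
    )
    print(prompt)
    # Mark Twain was the father of Langdon. the [MASK] Mark Twain [MASK] the [MASK] Langdon
```

Because the expected fillers at each mask position come from the class's logic rule, the prior knowledge about entity types and relation words is encoded directly into the prompt rather than learned from scratch.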