Fine-tuned pre-trained language models (PLMs) have achieved impressive performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks, such as sentiment classification and natural language inference. However, manually designing many language prompts is cumbersome and error-prone, and for auto-generated prompts, it is expensive and time-consuming to verify their effectiveness in non-few-shot scenarios. Hence, it is challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts from several sub-prompts. In this way, PTR is able to encode prior knowledge about each class into prompt tuning. We conduct experiments on relation classification, a typical many-class classification task, and the results on benchmarks show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach for leveraging PLMs on complicated classification tasks.
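The idea of composing a full prompt from rule-governed sub-prompts can be illustrated with a minimal sketch. The template strings and function names below are illustrative assumptions, not the exact implementation: one sub-prompt queries each entity's type and another queries the relation between them, and the logic rule conjoins them into a single template whose [MASK] positions the PLM fills in.

```python
# Minimal sketch of a PTR-style prompt for relation classification.
# Templates and names are illustrative assumptions, not the paper's
# exact design: a rule such as
#   "x is a person" AND "y is a location" AND "x [rel] y"
# is rendered as a conjunction of sub-prompts with [MASK] slots.

def entity_subprompt(entity: str) -> str:
    # Sub-prompt asking for the type of one entity, e.g. person/location.
    return f"the [MASK] {entity}"

def relation_subprompt() -> str:
    # Sub-prompt asking for the relation connecting the two entities.
    return "[MASK]"

def compose_prompt(sentence: str, head: str, tail: str) -> str:
    # Conjoin the sub-prompts into one template appended to the input,
    # mirroring how the logic rule combines conditions on entity types
    # and the relation between them.
    return (f"{sentence} {entity_subprompt(head)} "
            f"{relation_subprompt()} {entity_subprompt(tail)}.")

prompt = compose_prompt("Mark Twain was born in Florida.",
                        "Mark Twain", "Florida")
print(prompt)
# The PLM then predicts the three [MASK] tokens jointly; the predicted
# (type, relation, type) triple is mapped back to a relation label.
```

A masked language model fills the three [MASK] slots, and the predicted combination (e.g. person / was born in / location) is mapped to a class label, which is how prior knowledge about entity types constrains the many-class decision.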