Providing pretrained language models with simple task descriptions in natural language enables them to solve some tasks in a fully unsupervised fashion. Moreover, when combined with regular learning from examples, this idea yields impressive few-shot results for a wide range of text classification tasks. It is also a promising direction for improving data efficiency in generative settings, but combining task descriptions with example-based learning for text generation poses several challenges. In particular, it is crucial to find task descriptions that are easy for the pretrained model to understand and to ensure that the model actually makes good use of them; furthermore, effective measures against overfitting are required. In this paper, we show how these challenges can be tackled: we introduce GenPET, a method for text generation that builds on pattern-exploiting training, a recent approach for combining textual instructions with supervised learning that so far only works for classification tasks. On several summarization and headline generation datasets, GenPET gives consistent improvements over strong baselines in few-shot settings.
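To make the core idea concrete, the following is a minimal sketch of instruction-based few-shot fine-tuning for generation, assuming the HuggingFace transformers library. The model choice (t5-small), the pattern wording, and the bare-bones training loop are our own illustrative assumptions, not the paper's exact procedure; in particular, GenPET's specific patterns and its measures against overfitting are not shown here.

```python
# Minimal sketch: wrap each input in a natural-language task description
# (a "pattern"), then fine-tune a pretrained seq2seq model on a handful
# of labeled examples. All names below are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A pattern: a simple task description wrapped around the raw input.
pattern = "Summarize the following article in one sentence: {text}"

# A tiny supervised training set, as in a few-shot setting.
few_shot_examples = [
    ("The city council voted on Tuesday to extend the downtown bike "
     "lane network over the next two years.",
     "City council approves bike lane expansion."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for text, summary in few_shot_examples:
    inputs = tokenizer(pattern.format(text=text),
                       return_tensors="pt", truncation=True)
    labels = tokenizer(summary, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard seq2seq LM loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# At inference time, the same pattern is applied to unseen inputs.
model.eval()
test_input = tokenizer(pattern.format(text="..."), return_tensors="pt")
print(tokenizer.decode(model.generate(**test_input, max_new_tokens=30)[0],
                       skip_special_tokens=True))
```

The point of the pattern is that the pretrained model can exploit the textual instruction itself, so that even a handful of examples suffices to adapt it; the sketch above shows only this combination of instruction and supervised learning, not the additional machinery the paper introduces on top of it.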