Large-scale pretrained language models have led to dramatic improvements in text generation. Impressive performance can be achieved by finetuning on only a small number of instances (the few-shot setting). Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances; little attention has been paid to selection strategies and how they affect model performance. In this work, we present a study of training instance selection in few-shot neural text generation. The selection decision is based only on the unlabeled data, so as to identify the data points most worth annotating under a given labeling budget. Guided by the intuition that the few-shot training instances should be diverse and representative of the entire data distribution, we propose a simple selection strategy based on K-means clustering. We show that even with this naive clustering-based approach, the generation models consistently outperform random sampling on three text generation tasks: data-to-text generation, document summarization, and question generation. We hope this work draws more attention to this largely unexplored area.
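The clustering-based strategy described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each unlabeled instance has already been mapped to a fixed-length embedding vector, runs a plain K-means in pure Python, and returns the index of the instance nearest each centroid as the annotation candidate (so the selected set is both diverse, one point per cluster, and representative, each point lies near a cluster center).

```python
import math
import random

def _dist(a, b):
    # Euclidean distance between two equal-length vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans_select(embeddings, k, iters=20, seed=0):
    """Select k representative instance indices from `embeddings`
    (a list of equal-length float vectors, e.g. sentence embeddings):
    run K-means, then return the index of the point nearest each centroid.
    """
    rng = random.Random(seed)
    # Initialize centroids from k distinct data points.
    centroids = [list(p) for p in rng.sample(embeddings, k)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        assign = [min(range(k), key=lambda c: _dist(p, centroids[c]))
                  for p in embeddings]
        # Update step: recompute each centroid as its cluster mean.
        for c in range(k):
            members = [p for p, a in zip(embeddings, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members)
                                for col in zip(*members)]
    # Pick the actual data point closest to each centroid; these are
    # the instances to send for annotation under the labeling budget.
    selected = {min(range(len(embeddings)),
                    key=lambda i: _dist(embeddings[i], centroids[c]))
                for c in range(k)}
    return sorted(selected)
```

In practice the embeddings would come from a pretrained encoder, and a real implementation would use an optimized clustering library; the names `kmeans_select` and `_dist` here are illustrative only.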