Recent work has demonstrated that pre-trained language models (PLMs) are zero-shot learners. However, most existing zero-shot methods involve heavy human engineering or complicated self-training pipelines, hindering their application to new situations. In this work, we show that zero-shot text classification can be improved simply by clustering texts in the embedding spaces of PLMs. Specifically, we fit the unlabeled texts with a Bayesian Gaussian Mixture Model after initializing cluster positions and shapes using class names. Despite its simplicity, this approach achieves superior or comparable performance on both topic and sentiment classification datasets and outperforms prior works significantly on unbalanced datasets. We further explore the applicability of our clustering approach by evaluating it on 14 datasets with more diverse topics, text lengths, and numbers of classes. Our approach achieves an average of 20% absolute improvement over prompt-based zero-shot learning. Finally, we compare different PLM embedding spaces and find that texts are well-clustered by topics even if the PLM is not explicitly pre-trained to generate meaningful sentence embeddings. This work indicates that PLM embeddings can categorize texts without task-specific fine-tuning, thus providing a new way to analyze and utilize their knowledge and zero-shot learning ability.
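The core method can be sketched in a few lines. Below is a minimal illustration using scikit-learn: random vectors stand in for PLM sentence embeddings (the real method would embed the texts and class names with a PLM), and `GaussianMixture` with `means_init` approximates the described initialization, since scikit-learn's `BayesianGaussianMixture` does not expose per-component mean initialization. This is a sketch of the idea, not the paper's implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical stand-ins for PLM embeddings of two class names (16-dim).
class_name_emb = rng.normal(size=(2, 16))

# Synthetic "unlabeled texts": embeddings scattered around each class-name
# embedding, standing in for real PLM sentence embeddings.
text_emb = np.concatenate([
    class_name_emb[0] + 0.1 * rng.normal(size=(50, 16)),
    class_name_emb[1] + 0.1 * rng.normal(size=(50, 16)),
])

# Fit a GMM to the unlabeled embeddings, initializing cluster means at the
# class-name embeddings so each component corresponds to a known class.
gmm = GaussianMixture(
    n_components=2,
    covariance_type="full",
    means_init=class_name_emb,
    random_state=0,
)
labels = gmm.fit_predict(text_emb)  # component index doubles as class label
```

Because the component means start at the class-name embeddings, the mixture components stay aligned with the classes, so the predicted component index can be read directly as a zero-shot class prediction.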