Contrastive learning has emerged as a powerful representation learning method and facilitates various downstream tasks, especially when supervised data is limited. How to construct effective contrastive samples through data augmentation is key to its success. Unlike in vision tasks, data augmentation methods for contrastive learning have not been sufficiently investigated in language tasks. In this paper, we propose a novel approach to constructing contrastive samples for language tasks using text summarization. We use these samples for supervised contrastive learning to obtain better text representations, which greatly benefit text classification tasks with limited annotations. To further improve the method, we mix up samples from different classes and add an extra regularization term, named mix-sum regularization, in addition to the cross-entropy loss. Experiments on real-world text classification datasets (Amazon-5, Yelp-5, AG News) demonstrate the effectiveness of the proposed contrastive learning framework with summarization-based data augmentation and mix-sum regularization.