We propose a self-supervised learning method for long text documents based on contrastive learning. Key to our method is Shuffle and Divide (SaD), a simple text augmentation algorithm that sets up the pretext task required for contrastive updates to a BERT-based document embedding. SaD shuffles the words of a document and splits them into two sub-documents, each containing a random half of the original words. The two sub-documents are treated as a positive pair, while all other documents in the corpus serve as negatives. After SaD, we repeat the contrastive update and clustering phases until convergence. Labeling text documents is an inherently time-consuming, cumbersome task, and our method helps reduce the human effort involved, which is among the most expensive resources in AI. We empirically evaluate our method by performing unsupervised text classification on the 20 Newsgroups, Reuters-21578, BBC, and BBCSport datasets. In particular, our method surpasses the current state of the art, SS-SB-MT, on 20 Newsgroups by 20.94% in accuracy. We also achieve state-of-the-art performance on Reuters-21578 and exceptionally high accuracy (over 95%) for unsupervised classification on the BBC and BBCSport datasets.
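As a rough illustration of the SaD augmentation described above, the following sketch shuffles a document's words and divides them into two sub-documents that form a positive pair. This is a minimal sketch under our reading of the abstract; the function name, the whitespace tokenization, and the even split are illustrative assumptions, not the authors' implementation.

```python
import random

def shuffle_and_divide(document, seed=None):
    """Shuffle the words of a document and split them into two sub-documents.

    The two sub-documents are treated as a positive pair for contrastive
    learning; all other documents in the corpus serve as negatives.
    """
    rng = random.Random(seed)
    words = document.split()      # simple whitespace tokenization (assumption)
    rng.shuffle(words)            # randomly shuffle every word in the document
    mid = len(words) // 2
    return " ".join(words[:mid]), " ".join(words[mid:])

# Example usage
doc = "contrastive learning for long text documents with bert based embeddings"
sub_a, sub_b = shuffle_and_divide(doc, seed=0)
print(sub_a)
print(sub_b)
```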