Recently, self-supervised learning has attracted great attention since it requires only unlabeled data for training. Contrastive learning is a popular approach to self-supervised learning and performs well empirically. However, the theoretical understanding of its generalization ability on downstream tasks remains limited. To this end, we present a theoretical explanation of how contrastive self-supervised pre-trained models generalize to downstream tasks. Concretely, we quantitatively show that a self-supervised model generalizes well on downstream classification tasks if it embeds the input data into a feature space with well-separated class centers and closely clustered intra-class samples. Building on this conclusion, we further analyze SimCLR and Barlow Twins, two canonical contrastive self-supervised methods. We prove that such a feature space can be obtained by either method, which explains their success in generalizing to downstream classification tasks. Finally, we conduct various experiments to verify our theoretical findings.
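As an informal illustration of the two geometric quantities the abstract refers to, the following is a minimal sketch (not from the paper) that measures inter-class center separation and intra-class concentration on features produced by a frozen pre-trained encoder. The function name, the random stand-in features, and the labels are all hypothetical placeholders for illustration only.

```python
# Minimal sketch: estimate how far apart class centers are and how tightly
# samples cluster around their own class center in a feature space.
# The setup (function name, random features, labels) is hypothetical.
import numpy as np

def center_separation_and_concentration(features: np.ndarray, labels: np.ndarray):
    """features: (n, d) embeddings from a frozen encoder; labels: (n,) class ids."""
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])

    # Inter-class separation: smallest distance between any two class centers.
    diffs = centers[:, None, :] - centers[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)
    min_center_gap = dists.min()

    # Intra-class concentration: average distance of a sample to its own center.
    intra = np.mean([
        np.linalg.norm(features[labels == c] - centers[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    return min_center_gap, intra

# Example with random stand-in features; under the paper's condition, a good
# pre-trained encoder should give a large center gap relative to the intra-class spread.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))
labs = rng.integers(0, 5, size=100)
print(center_separation_and_concentration(feats, labs))
```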