Contrastive representation learning has recently proven highly effective for self-supervised training. These methods have been used to train encoders that perform comparably to supervised training on downstream classification tasks. A few works have started to build a theoretical framework around contrastive learning in which guarantees for its performance can be proven. We extend these results to training with multiple negative samples and to multiway classification. Furthermore, we provide convergence guarantees for the minimization of the contrastive training error with gradient descent on an overparametrized deep neural encoder, and present numerical experiments that complement our theoretical findings.