Contrastive learning (CL) has shown great power in self-supervised learning due to its ability to capture intrinsic correlations in large-scale data. However, owing to the discriminative task setting, current CL models are biased toward learning only the ability to discriminate between positive and negative pairs. This bias causes them to neglect whether the learned representations are sufficient for other downstream tasks, a problem we call discriminative information overfitting. In this paper, we propose to tackle this problem from the perspective of the Information Bottleneck (IB) principle, further pushing forward the frontier of CL. Specifically, we present a new perspective in which CL is an instantiation of the IB principle, comprising information compression and expression. We theoretically analyze the optimal information condition and show that minimum sufficient augmentation and information-generalized representations are required to achieve maximum compression and generalizability to downstream tasks. Accordingly, we propose the Masked Reconstruction Contrastive Learning~(MRCL) model to improve CL models. In practice, MRCL applies a masking operation as a stronger augmentation, further eliminating redundant and noisy information, and employs a reconstruction task to regularize the discriminative task, effectively alleviating the discriminative information overfitting problem. Comprehensive experiments demonstrate the superiority of the proposed model on multiple tasks, including image classification, semantic segmentation, and object detection.
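To make the training objective concrete, below is a minimal PyTorch-style sketch of the combination the abstract describes: two masked views feed a contrastive (InfoNCE) loss, while a decoder reconstructs the masked content as a regularizer. The `encoder`, `proj`, and `decoder` modules, the patch size, the masking ratio, and the weighting `lam` are hypothetical placeholders; the abstract does not specify these details, so this is an illustrative sketch rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(x, patch=4, ratio=0.5):
    """Zero out a random subset of non-overlapping patches (hypothetical masking scheme)."""
    B, C, H, W = x.shape
    keep = (torch.rand(B, 1, H // patch, W // patch, device=x.device) > ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * mask, mask

def info_nce(z1, z2, tau=0.2):
    """Standard InfoNCE loss with in-batch negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def mrcl_loss(encoder, proj, decoder, x, lam=1.0):
    """Contrastive loss on two masked views plus reconstruction of masked regions."""
    v1, m1 = random_patch_mask(x)
    v2, _ = random_patch_mask(x)
    h1, h2 = encoder(v1), encoder(v2)
    l_con = info_nce(proj(h1), proj(h2))
    # Reconstruction regularizer: recover the original image from one masked view,
    # penalizing errors only on the masked-out regions.
    x_hat = decoder(h1)
    l_rec = F.mse_loss(x_hat * (1 - m1), x * (1 - m1))
    return l_con + lam * l_rec
```

Under this sketch, the masking acts as the "stronger augmentation" that discards redundant pixels, and `l_rec` forces the representation to retain information beyond what pair discrimination alone demands.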