Contrastive learning demonstrates great promise for representation learning. Data augmentations play a critical role in contrastive learning by providing informative views of the data without requiring labels. However, the performance of existing methods relies heavily on the quality of the employed data augmentation (DA) functions, which are typically hand-picked from a restricted set of choices. While exploiting a diverse set of data augmentations is appealing, the intricate interplay between DAs and representation learning may lead to performance degradation. To address this challenge and enable the systematic use of large numbers of data augmentations, this paper proposes Contrastive Learning with Consistent Representations (CoCor). At the core of CoCor is a new consistency measure, DA consistency, which dictates how augmented input data are mapped to the representation space, placing these instances at optimal locations in a manner consistent with the intensity of the applied DA. Furthermore, a data-driven approach is proposed to learn the optimal mapping locations as a function of DA while maintaining a desired monotonicity property with respect to DA intensity. The proposed techniques give rise to a semi-supervised learning framework based on bi-level optimization, achieving new state-of-the-art results for image recognition.