Contrastive learning, which aims to minimize the distance between positive pairs while maximizing that between negative ones, has been widely and successfully applied to unsupervised feature learning, where the design of positive and negative (pos/neg) pairs is one of its key components. In this paper, we attempt to devise a feature-level data manipulation, distinct from data augmentation, to enhance generic contrastive self-supervised learning. To this end, we first design a visualization scheme for the pos/neg score distribution (the pos/neg score is the cosine similarity of a pos/neg pair), which enables us to analyze, interpret, and understand the learning process. To our knowledge, this is the first attempt of its kind. More importantly, leveraging this tool, we gain some significant observations, which inspire our novel Feature Transformation proposals, including the extrapolation of positives. This operation creates harder positives to boost learning, since hard positives encourage the model to be more view-invariant. In addition, we propose interpolation among negatives, which provides diversified negatives and makes the model more discriminative. This is the first attempt to address both challenges simultaneously. Experimental results show that our proposed Feature Transformation improves accuracy by at least 6.0% on ImageNet-100 over the MoCo baseline, and by about 2.0% on ImageNet-1K over the MoCoV2 baseline. Successful transfer to downstream tasks demonstrates that our model is less task-biased. Visualization tools and code are available at https://github.com/DTennant/CL-Visualizing-Feature-Transformation.
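To make the two operations concrete, below is a minimal PyTorch sketch of what the abstract describes: computing pos/neg scores as cosine similarities over L2-normalized features, extrapolating a positive pair to create harder positives, and interpolating between negatives from a MoCo-style queue to diversify them. The function names, the Beta-distributed mixing coefficients, and the specific `alpha` values are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def pos_neg_scores(q, k, queue):
    """Scores used for the visualization: q, k are L2-normalized
    query/key features (N, D); queue holds negative features (D, K)."""
    pos = (q * k).sum(dim=1)   # (N,)  cosine similarity of each positive pair
    neg = q @ queue            # (N, K) cosine similarities to queued negatives
    return pos, neg

def extrapolate_positives(q, k, alpha=2.0):
    """Create harder positives by extrapolating the pair away from each other.
    lam > 1 pushes each feature beyond its partner; Beta sampling is an
    assumption made here for illustration."""
    lam = 1.0 + torch.distributions.Beta(alpha, alpha).sample(
        (q.size(0), 1)).to(q.device)
    q_hat = lam * q + (1.0 - lam) * k
    k_hat = lam * k + (1.0 - lam) * q
    # Re-normalize so the transformed features stay on the unit hypersphere.
    return F.normalize(q_hat, dim=1), F.normalize(k_hat, dim=1)

def interpolate_negatives(queue, alpha=1.6):
    """Diversify negatives via convex interpolation between random queue entries."""
    neg = queue.t()                                  # (K, D)
    perm = torch.randperm(neg.size(0))
    lam = torch.distributions.Beta(alpha, alpha).sample(
        (neg.size(0), 1)).to(neg.device)
    mixed = lam * neg + (1.0 - lam) * neg[perm]
    return F.normalize(mixed, dim=1).t()             # back to (D, K)
```

In a MoCo-style loop, one would apply these transformations to the features right before forming the InfoNCE logits, leaving the data-augmentation pipeline untouched, which is what distinguishes this feature-level manipulation from input-level augmentation.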