Unsupervised learning is at a tipping point where it could truly take off. Among these approaches, contrastive learning has seen tremendous progress and led to state-of-the-art performance. In this paper, we construct a novel probabilistic graphical model that effectively incorporates a low rank promoting prior into the framework of contrastive learning, referred to as LORAC. In contrast to existing self-supervised approaches that consider each sample independently, our hypothesis explicitly requires that all samples belonging to the same instance class lie in the same low-dimensional subspace. This heuristic imposes joint learning constraints that reduce the degrees of freedom of the problem during the search for the optimal network parameterization. Most importantly, we argue that the low rank prior employed here is not unique: many different priors can be invoked in a similar probabilistic way, each corresponding to a different hypothesis about the underlying truth behind the contrastive features. Empirical evidence shows that the proposed algorithm clearly surpasses state-of-the-art approaches on multiple benchmarks, including image classification, object detection, instance segmentation and keypoint detection.
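To make the idea concrete, the following is a minimal sketch of one way a low-rank promoting prior could be combined with a standard InfoNCE contrastive loss. The nuclear-norm surrogate, the weight `lam`, and the function names are illustrative assumptions, not the paper's exact probabilistic formulation.

```python
# Minimal sketch (illustrative only): an InfoNCE contrastive loss plus a
# low-rank promoting penalty on the features of each instance class.
# The nuclear-norm penalty and the weight `lam` are assumptions made for
# illustration; LORAC derives its actual objective probabilistically.
import torch
import torch.nn.functional as F

def info_nce(query, key, queue, tau=0.07):
    """Standard InfoNCE loss. `query` and `key` are (N, D) L2-normalized
    features of two augmentations of the same instances; `queue` is a
    (K, D) bank of negative features."""
    pos = torch.einsum("nd,nd->n", query, key).unsqueeze(1)  # (N, 1)
    neg = torch.einsum("nd,kd->nk", query, queue)            # (N, K)
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(len(query), dtype=torch.long, device=query.device)
    return F.cross_entropy(logits, labels)

def low_rank_penalty(views):
    """Nuclear norm (sum of singular values) of the (M, D) matrix stacking
    M augmented views of one instance; a small value means the views lie
    near a low-dimensional subspace, matching the low rank hypothesis."""
    return torch.linalg.svdvals(views).sum()

def lorac_style_loss(query, key, queue, views_per_instance, lam=1e-3):
    """Contrastive loss jointly constrained by a low-rank prior over each
    instance's views. `views_per_instance` is a list of (M_i, D) feature
    matrices, one per instance class."""
    loss = info_nce(query, key, queue)
    penalty = torch.stack(
        [low_rank_penalty(v) for v in views_per_instance]).mean()
    return loss + lam * penalty
```

In this sketch, the penalty term couples all augmented views of the same instance, which is what turns the otherwise independent per-sample objective into a joint learning constraint.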