Cluster discrimination is an effective pretext task for unsupervised representation learning, which often consists of two phases: clustering and discrimination. Clustering assigns each instance a pseudo label that is then used to learn representations in the discrimination phase. The main challenge resides in clustering, since many prevalent clustering methods (e.g., k-means) have to run in a batch mode that makes multiple passes over the whole data set. Recently, a balanced online clustering method, i.e., SwAV, was proposed for representation learning. However, its assignment is optimized within only a small subset of data, which can be suboptimal. To address these challenges, we first investigate the objective of clustering-based representation learning from the perspective of distance metric learning. Based on this, we propose a novel clustering-based pretext task with online \textbf{Co}nstrained \textbf{K}-m\textbf{e}ans (\textbf{CoKe}) that learns representations and relations between instances simultaneously. Compared with balanced clustering, where every cluster has exactly the same size, we constrain only the minimum size of clusters so as to flexibly capture the inherent data structure. More importantly, our online assignment method has a theoretical guarantee of approaching the global optimum. Finally, two variance-reduction strategies are proposed to make the clustering robust to different augmentations. CoKe accesses data in an online mode without keeping representations of instances, and a single view of each instance per iteration is sufficient to outperform contrastive learning methods that rely on two views. Extensive experiments on ImageNet verify the efficacy of our proposal. Code will be released.
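To illustrate the minimum-size constraint discussed above, the sketch below contrasts it with plain nearest-center assignment: points first go to their nearest center, then the cheapest points are greedily moved into undersized clusters until every cluster holds at least `min_size` points. This is only a minimal batch illustration, not the paper's online assignment algorithm; the function name, the greedy repair scheme, and the NumPy formulation are our own assumptions.

```python
import numpy as np

def constrained_kmeans_assign(X, centers, min_size):
    """Assign each point in X to a center, subject to every cluster
    containing at least `min_size` points (assumes len(X) >= k * min_size).

    Illustrative greedy heuristic only, not the method from the paper.
    """
    n, k = len(X), len(centers)
    # Squared distance from every point to every center, shape (n, k).
    d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    # Unconstrained step: nearest-center assignment.
    labels = d.argmin(1)
    # Repair step: fill undersized clusters with the cheapest donors.
    for c in range(k):
        while (labels == c).sum() < min_size:
            # Donors: points outside c whose own cluster stays >= min_size.
            donors = [i for i in range(n)
                      if labels[i] != c
                      and (labels == labels[i]).sum() > min_size]
            # Move the point whose reassignment increases cost the least.
            i = min(donors, key=lambda j: d[j, c] - d[j, labels[j]])
            labels[i] = c
    return labels
```

A balanced assignment (as in SwAV) would instead force every cluster to size exactly `n / k`; relaxing that to a lower bound lets cluster sizes follow the data while still preventing collapsed (empty or tiny) clusters.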