The Vision Transformer (ViT) has shown advantages over convolutional neural networks (CNNs) owing to its ability to capture global long-range dependencies for visual representation learning. Besides ViT, contrastive learning has recently become another popular research topic. While previous contrastive learning works are mostly based on CNNs, some recent studies have attempted to jointly model ViT and contrastive learning for enhanced self-supervised learning. Despite considerable progress, these combinations of ViT and contrastive learning mostly focus on instance-level contrastiveness, often overlooking the contrastiveness of global clustering structures and lacking the ability to directly learn the clustering result (e.g., for images). In view of this, this paper presents an end-to-end deep image clustering approach termed Vision Transformer for Contrastive Clustering (VTCC), which, for the first time to the best of our knowledge, unifies the Transformer and contrastive learning for the image clustering task. Specifically, with two random augmentations performed on each image in a mini-batch, we utilize a ViT encoder with two weight-sharing views as the backbone to learn the representations of the augmented samples. To remedy the potential training instability of ViT, we incorporate a convolutional stem, which uses multiple stacked small convolutions instead of a single large convolution in the patch projection layer, to split each augmented sample into a sequence of patches. With the representations learned by the backbone, an instance projector and a cluster projector are further utilized for instance-level contrastive learning and global clustering structure learning, respectively. Extensive experiments on eight image datasets demonstrate the stability (during training from scratch) and the superiority (in clustering performance) of VTCC over the state-of-the-art.
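To illustrate why a convolutional stem can replace the patch projection layer, the shape arithmetic can be checked directly: a stack of small stride-2 convolutions can reach the same downsampling factor (and hence the same token count) as one large patch-projection convolution. The 3x3/stride-2/padding-1 configuration below is an illustrative assumption, not the paper's exact stem.

```python
def conv_out(size, k, s, p):
    """Spatial output size of a convolution: floor((size + 2p - k) / s) + 1."""
    return (size + 2 * p - k) // s + 1

# Standard ViT patch projection: one big 16x16 conv with stride 16.
big = conv_out(224, k=16, s=16, p=0)            # 14, i.e. 14 * 14 = 196 tokens

# Convolutional stem (illustrative): four stacked 3x3, stride-2, padding-1
# convs give the same overall /16 downsampling: 224 -> 112 -> 56 -> 28 -> 14.
size = 224
for _ in range(4):
    size = conv_out(size, k=3, s=2, p=1)

print(big, size, big * big)                      # 14 14 196
```

Both routes produce a 14x14 grid of tokens from a 224x224 image; the stacked small convolutions simply arrive there more gradually, which is the property credited with stabilizing ViT training.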
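For the instance projector, a standard formulation of instance-level contrastive learning is the NT-Xent objective, where each sample's other-view augmentation is its positive and the remaining batch samples are negatives. The sketch below is a minimal NumPy version under that assumption; the function name, temperature, and shapes are illustrative, as the abstract does not specify VTCC's exact loss.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent loss over two augmented views z1, z2 of shape (N, d):
    each row's counterpart in the other view is its positive pair."""
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize rows
    sim = z @ z.T / tau                               # temperature-scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_denom = np.log(np.exp(sim).sum(axis=1))       # log-sum-exp over candidates
    return float(np.mean(log_denom - sim[np.arange(2 * n), pos]))

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))             # a "close" second view
print(nt_xent_loss(z1, z2))                           # small loss for aligned views
```

The cluster projector follows the same contrastive idea but over the columns of the soft assignment matrix, so that cluster distributions (rather than instances) from the two views are pulled together.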