Masked Image Modeling (MIM) is a recent self-supervised pre-training paradigm for Vision Transformers (ViTs). Previous works are either pixel-based or token-based, using raw pixels or discrete visual tokens from parametric tokenizer models, respectively. Our proposed approach, \textbf{CCViT}, leverages k-means clustering to obtain centroids for image modeling without supervised training of a tokenizer model. The centroids simultaneously represent patch pixels and index tokens, and exhibit local invariance. The non-parametric centroid tokenizer takes only seconds to create and offers faster token inference. Specifically, we adopt patch masking and centroid replacement strategies to construct corrupted inputs, and two stacked encoder blocks to predict the corrupted patch tokens and reconstruct the original patch pixels. Experiments show that a ViT-B model trained for only 300 epochs achieves 84.3\% top-1 accuracy on ImageNet-1K classification and 51.6\% mIoU on ADE20K semantic segmentation. Our approach achieves results competitive with BEiTv2 without requiring distillation from external models, and outperforms other methods such as MAE.
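To make the centroid tokenizer concrete, below is a minimal sketch (not the authors' released code) of how such a non-parametric tokenizer could be built: fit k-means once over flattened image patches, then tokenize any image by nearest-centroid assignment, with the centroid itself serving as the pixel-level replacement for corrupted patches. The patch size (16), vocabulary size (8192), and use of scikit-learn's MiniBatchKMeans are illustrative assumptions, not values from the paper.

```python
# Sketch of a non-parametric centroid tokenizer via k-means over patches.
# Hyperparameters below are assumptions for illustration only.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

PATCH = 16   # patch side length (assumption)
K = 8192     # number of centroids, i.e. token vocabulary size (assumption)

def patchify(images: np.ndarray) -> np.ndarray:
    """Split (N, H, W, C) images into flattened (N*P, PATCH*PATCH*C) patch vectors."""
    n, h, w, c = images.shape
    p = images.reshape(n, h // PATCH, PATCH, w // PATCH, PATCH, c)
    p = p.transpose(0, 1, 3, 2, 4, 5)
    return p.reshape(-1, PATCH * PATCH * c).astype(np.float32)

def fit_tokenizer(sample_images: np.ndarray, k: int = K) -> MiniBatchKMeans:
    """Build the tokenizer with a single k-means fit over sampled patches;
    no gradient-trained tokenizer network is involved."""
    km = MiniBatchKMeans(n_clusters=k, batch_size=4096, random_state=0)
    km.fit(patchify(sample_images))
    return km

def tokenize(images: np.ndarray, km: MiniBatchKMeans) -> np.ndarray:
    """One discrete token (nearest-centroid index) per patch."""
    return km.predict(patchify(images))

def centroid_replace(images: np.ndarray, km: MiniBatchKMeans) -> np.ndarray:
    """Replace each patch by its nearest centroid's pixels: because a centroid
    is both an index token and a patch of pixels, it can stand in for the
    original patch when constructing corrupted inputs."""
    tokens = tokenize(images, km)
    return km.cluster_centers_[tokens]  # (N*P, PATCH*PATCH*C)
```

This dual role of the centroid, as a discrete token for the prediction objective and as pixel content for input corruption, is what distinguishes the approach from tokenizers that require their own supervised or generative pre-training.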