Recently, Masked Image Modeling (MIM) has achieved great success in self-supervised visual recognition. However, as a reconstruction-based framework, it remains an open question how MIM works, since MIM appears very different from previously well-studied siamese approaches such as contrastive learning. In this paper, we propose a new viewpoint: MIM implicitly learns occlusion-invariant features, analogous to other siamese methods, which learn other kinds of invariance. By relaxing the MIM formulation into an equivalent siamese form, MIM methods can be interpreted within a unified framework alongside conventional methods, differing only in a) the data transformations, i.e., what invariance to learn, and b) the similarity measurements. Furthermore, taking MAE (He et al.) as a representative example of MIM, we empirically find that the success of MIM models relates little to the choice of similarity functions, but rather to the occlusion-invariant features introduced by masked images -- these turn out to provide a favorable initialization for vision transformers, even though the learned features may be less semantic. We hope our findings can inspire researchers to develop more powerful self-supervised methods in the computer vision community.
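To make the claimed relaxation concrete, the following is a minimal sketch in assumed notation (f denotes the encoder, d the decoder, M a binary patch mask, \odot element-wise masking, and \mathcal{M}, \mathcal{M}' similarity measurements); it illustrates the idea rather than reproducing the paper's exact derivation. A reconstruction-based MIM objective can be written as

\[
\mathcal{L}_{\mathrm{MIM}} \;=\; \mathbb{E}_{x}\,\mathcal{M}\bigl(d \circ f(x \odot M),\; x \odot (1 - M)\bigr),
\]

and relaxing the pixel-space target into a feature-space one yields a siamese-style objective, in which the representation of the masked view is pulled toward that of the full view, with d acting as a projection head:

\[
\mathcal{L}_{\mathrm{MIM}} \;\approx\; \mathbb{E}_{x}\,\mathcal{M}'\bigl(d \circ f(x \odot M),\; f(x)\bigr).
\]

Under this reading, occlusion (masking) plays the role of the data transformation whose invariance is being learned, which is precisely the unified view stated above.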