Masked image modeling (MIM), an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision Transformers (ViTs). Its underlying idea is simple: a portion of the input image is randomly masked out and then reconstructed via a pretext task. However, the working principle behind MIM is not well explained, and previous studies have insisted that MIM primarily works for the Transformer family but is incompatible with CNNs. In this paper, we first study interactions among patches to understand what knowledge is learned, and how it is acquired, via the MIM task. We observe that MIM essentially teaches the model to learn better middle-order interactions among patches and to extract more generalized features. Based on this observation, we propose an Architecture-Agnostic Masked Image Modeling framework (A$^2$MIM), which is compatible with both Transformers and CNNs in a unified way. Extensive experiments on popular benchmarks show that A$^2$MIM learns better representations without explicit design and endows the backbone model with a stronger capability to transfer to various downstream tasks, for both Transformers and CNNs.
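To make the masking-and-reconstruction pretext task concrete, the following is a minimal illustrative sketch (not the paper's implementation): an image is split into a grid of patches and a random fraction of them is masked; a model would then be trained to reconstruct the content at the masked locations from the visible context. The patch size, mask ratio, and the choice of zeroing pixels rather than using a learnable mask token are assumptions made here for illustration only.

```python
# Illustrative sketch of random patch masking for MIM-style pre-training.
# Assumptions (not from the paper): patch_size=16, mask_ratio=0.6, masked
# patches are zeroed in pixel space instead of replaced by a mask token.
import torch


def random_patch_mask(img, patch_size=16, mask_ratio=0.6):
    """Return the image with a random subset of patches zeroed out,
    plus the boolean mask (True = masked) over the patch grid."""
    B, C, H, W = img.shape
    gh, gw = H // patch_size, W // patch_size
    num_patches = gh * gw
    num_masked = int(mask_ratio * num_patches)

    # Randomly choose which patches to mask for each sample in the batch.
    noise = torch.rand(B, num_patches)
    ids = noise.argsort(dim=1)                   # random permutation per sample
    mask = torch.zeros(B, num_patches, dtype=torch.bool)
    mask.scatter_(1, ids[:, :num_masked], True)  # True = masked patch

    # Expand the patch-grid mask to pixel resolution and zero out masked regions.
    mask_2d = mask.view(B, 1, gh, gw).float()
    mask_2d = mask_2d.repeat_interleave(patch_size, 2).repeat_interleave(patch_size, 3)
    masked_img = img * (1.0 - mask_2d)
    return masked_img, mask


# Usage: the backbone would be trained to predict the original pixels (or
# features) at the masked positions from the remaining visible patches.
x = torch.randn(2, 3, 224, 224)
x_masked, mask = random_patch_mask(x)
```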