Masked image modeling (MIM) pre-training has been shown to be effective for numerous vision downstream tasks, but how and where MIM works remains unclear. In this paper, we compare MIM with the long-dominant supervised pre-trained models from two perspectives, visualizations and experiments, to uncover their key representational differences. From the visualizations, we find that MIM brings a locality inductive bias to all layers of the trained models, whereas supervised models tend to focus locally at lower layers but more globally at higher layers. This may be why MIM helps Vision Transformers, which have a very large receptive field, to optimize. With MIM, the model can maintain large diversity across attention heads in all layers; for supervised models, however, the diversity of attention heads almost disappears in the last three layers, and this reduced diversity harms fine-tuning performance. From the experiments, we find that MIM models can perform significantly better than their supervised counterparts on geometric and motion tasks with weak semantics, as well as on fine-grained classification tasks. Without bells and whistles, a standard MIM pre-trained SwinV2-L could achieve state-of-the-art performance on pose estimation (78.9 AP on COCO test-dev and 78.0 AP on CrowdPose), depth estimation (0.287 RMSE on NYUv2 and 1.966 RMSE on KITTI), and video object tracking (70.7 SUC on LaSOT). For semantic understanding datasets where the categories are sufficiently covered by the supervised pre-training, MIM models can still achieve highly competitive transfer performance. With a deeper understanding of MIM, we hope that our work can inspire new and solid research in this direction.
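The locality claim above is typically quantified with an average attention distance diagnostic: for each head, weight the spatial distance between query and key patches by the attention probabilities. The following is a minimal sketch of such a measurement; it is illustrative only (the paper's exact procedure is not specified here), and it assumes attention maps of shape `(num_heads, N, N)` over a square patch grid with no class token.

```python
import numpy as np

def average_attention_distance(attn, grid_size):
    """Mean spatial distance (in patch units) that each head attends over.

    attn: array of shape (num_heads, N, N) of softmax attention weights,
          where N = grid_size**2 patch tokens (class token omitted for
          simplicity).
    Returns one value per head; lower values indicate more local attention.
    """
    n = grid_size * grid_size
    # (row, col) coordinates of every patch, shape (N, 2).
    coords = np.stack(np.meshgrid(np.arange(grid_size),
                                  np.arange(grid_size),
                                  indexing="ij"), axis=-1).reshape(n, 2)
    # Pairwise Euclidean distances between patch positions, shape (N, N).
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Attention-weighted distance per query token, averaged over queries.
    return (attn * dist[None]).sum(axis=-1).mean(axis=-1)

# Toy example on a 4x4 grid: a perfectly local head (identity attention)
# versus a uniform, global head.
g = 4
local = np.eye(g * g)[None]                          # attends only to itself
uniform = np.full((1, g * g, g * g), 1.0 / (g * g))  # attends everywhere
d = average_attention_distance(np.concatenate([local, uniform]), g)
# d[0] is 0.0 (fully local); d[1] is clearly larger (global).
```

Applied layer by layer, a flat distance profile across depth would reflect the MIM behavior described above, while a profile that grows with depth would match the supervised models.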