Recently, self-supervised Masked Autoencoders (MAE) have attracted unprecedented attention for their impressive representation learning ability. However, the pretext task, Masked Image Modeling (MIM), reconstructs the missing local patches but lacks a global understanding of the image. This paper extends MAE to a fully-supervised setting by adding a supervised classification branch, thereby enabling MAE to effectively learn global features from golden labels. The proposed Supervised MAE (SupMAE) exploits only a visible subset of image patches for classification, unlike standard supervised pre-training, where all image patches are used. Through experiments, we demonstrate that SupMAE is not only more training-efficient but also learns more robust and transferable features. Specifically, SupMAE achieves performance comparable to MAE using only 30% of the compute when evaluated on ImageNet with the ViT-B/16 model. On robustness to ImageNet variants and on transfer learning, SupMAE outperforms both MAE and the standard supervised pre-training counterpart. Code will be made publicly available.
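To make the two-branch idea concrete, below is a minimal PyTorch sketch of a SupMAE-style forward pass: the encoder sees only the visible patch subset, a classification head is trained on the globally pooled visible tokens with the image labels, and a lightweight decoder reconstructs the masked patches as in MIM. Everything here is illustrative rather than the authors' implementation: the module name `SupMAESketch`, the use of `nn.TransformerEncoder` as a stand-in for the actual ViT-B/16 blocks, the layer counts, the mean-pooled classification input, and the equal weighting of the two losses are all assumptions made for the sketch.

```python
# Minimal SupMAE-style sketch (assumptions: PyTorch; nn.TransformerEncoder
# stands in for ViT-B/16; layer counts and loss weighting are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SupMAESketch(nn.Module):
    def __init__(self, num_patches=196, patch_dim=768, embed_dim=768,
                 num_classes=1000, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.patch_embed = nn.Linear(patch_dim, embed_dim)
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
        enc_layer = nn.TransformerEncoderLayer(embed_dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=12)
        # Supervised branch: classify from the *visible* tokens only.
        self.cls_head = nn.Linear(embed_dim, num_classes)
        # Reconstruction branch: a lightweight decoder predicts masked patches.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        dec_layer = nn.TransformerEncoderLayer(embed_dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.recon_head = nn.Linear(embed_dim, patch_dim)

    def forward(self, patches, labels):
        # patches: (B, N, patch_dim) flattened image patches; labels: (B,) ints.
        B, N, _ = patches.shape
        num_keep = int(N * (1 - self.mask_ratio))
        # Per-sample random shuffle; the first `num_keep` patches stay visible.
        ids = torch.argsort(torch.rand(B, N, device=patches.device), dim=1)
        ids_keep, ids_mask = ids[:, :num_keep], ids[:, num_keep:]
        x = self.patch_embed(patches) + self.pos_embed
        visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        latent = self.encoder(visible)  # encode the visible subset only
        # Classification loss from globally pooled visible tokens (golden labels).
        cls_loss = F.cross_entropy(self.cls_head(latent.mean(dim=1)), labels)
        # Append mask tokens, restore positional information, reconstruct (MIM).
        mask_tokens = self.mask_token.expand(B, N - num_keep, -1)
        dec_pos = torch.gather(self.pos_embed.expand(B, -1, -1), 1,
                               ids.unsqueeze(-1).expand(-1, -1, x.size(-1)))
        dec_in = torch.cat([latent, mask_tokens], dim=1) + dec_pos
        pred = self.recon_head(self.decoder(dec_in))[:, num_keep:]
        target = torch.gather(patches, 1,
                              ids_mask.unsqueeze(-1).expand(-1, -1, patches.size(-1)))
        recon_loss = F.mse_loss(pred, target)
        return recon_loss + cls_loss  # equal weighting is an assumption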