We propose bootstrapped masked autoencoders (BootMAE), a new approach for vision BERT pretraining. BootMAE improves the original masked autoencoders (MAE) with two core designs: 1) a momentum encoder that provides online features as extra BERT prediction targets; 2) a target-aware decoder that reduces the pressure on the encoder to memorize target-specific information during BERT pretraining. The first design is motivated by the observation that using a pretrained MAE to extract features as the BERT prediction target for masked tokens achieves better pretraining performance. We therefore add a momentum encoder in parallel with the original MAE encoder, which bootstraps pretraining by using its own representation as the BERT prediction target. In the second design, we introduce target-specific information (e.g., pixel values of unmasked patches) directly to the decoder, relieving the encoder from memorizing it. The encoder can thus focus on semantic modeling, the goal of BERT pretraining, rather than wasting capacity on memorizing information about unmasked tokens that is only relevant to the prediction target. In extensive experiments, BootMAE achieves $84.2\%$ Top-1 accuracy on ImageNet-1K with a ViT-B backbone, outperforming MAE by $+0.8\%$ under the same number of pretraining epochs. BootMAE also brings $+1.0$ mIoU on ADE20K semantic segmentation, and $+1.3$ box AP and $+1.4$ mask AP on COCO object detection and segmentation. Code is released at https://github.com/LightDXY/BootMAE.
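To make the two designs concrete, below is a minimal PyTorch sketch of one pretraining step. It is not the released implementation (see the repository above): `ToyEncoder`, `ToyDecoder`, the tensor shapes, the 75% mask ratio, and the EMA momentum value are illustrative assumptions that only show how the momentum-encoder feature target and the target-aware decoder context fit together.

```python
# Toy sketch of the two BootMAE designs; module names and shapes are
# illustrative assumptions, not the authors' released code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM, NUM_PATCHES = 64, 16


class ToyEncoder(nn.Module):
    """Stand-in for the ViT encoder: maps patch embeddings to features."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, DIM), nn.GELU(), nn.Linear(DIM, DIM))

    def forward(self, x):
        return self.net(x)


class ToyDecoder(nn.Module):
    """Stand-in decoder: predicts a per-patch target from encoder features
    plus target-specific context injected directly (design 2)."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * DIM, DIM), nn.GELU(), nn.Linear(DIM, out_dim))

    def forward(self, latent, context):
        return self.net(torch.cat([latent, context], dim=-1))


encoder = ToyEncoder()
momentum_encoder = copy.deepcopy(encoder)        # design 1: EMA teacher
for p in momentum_encoder.parameters():
    p.requires_grad_(False)
pixel_decoder, feature_decoder = ToyDecoder(DIM), ToyDecoder(DIM)
ema = 0.999                                      # illustrative momentum value

patches = torch.randn(2, NUM_PATCHES, DIM)       # toy patch embeddings
mask = torch.rand(2, NUM_PATCHES) < 0.75         # 75% of patches are masked

# The online encoder models only the visible patches; zeroing masked ones
# here is a simplification to keep the toy shapes fixed.
latent = encoder(patches * (~mask).unsqueeze(-1))

with torch.no_grad():
    # Momentum encoder provides online features as extra prediction targets.
    feat_target = momentum_encoder(patches)

# Target-specific context (e.g. unmasked pixel values) is fed straight to the
# decoders, so the encoder can spend its capacity on semantic modeling.
context = patches * (~mask).unsqueeze(-1)
pixel_pred = pixel_decoder(latent, context)      # pixel regression branch
feature_pred = feature_decoder(latent, context)  # feature prediction branch

loss = F.mse_loss(pixel_pred[mask], patches[mask]) + \
       F.mse_loss(feature_pred[mask], feat_target[mask])
loss.backward()

with torch.no_grad():                            # EMA update of the teacher
    for q, k in zip(encoder.parameters(), momentum_encoder.parameters()):
        k.mul_(ema).add_(q, alpha=1.0 - ema)
```

Note that the momentum encoder is updated only by the exponential moving average and receives no gradients, so the feature target evolves with (and bootstraps from) the online encoder's own representation.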