Masked Autoencoder (MAE) has demonstrated superior performance on various vision tasks via randomly masking image patches and reconstruction. However, effective data augmentation strategies for MAE remain an open question, unlike in contrastive learning, where augmentation serves as a core component. This paper studies the prevailing mixing augmentation for MAE. We first demonstrate that naive mixing in fact degrades model performance due to an increase in mutual information (MI). To address this, we propose homologous recognition, an auxiliary pretext task that not only alleviates the MI increase by explicitly requiring each patch to recognize its homologous patches, but also performs object-aware self-supervised pre-training for better downstream dense perception. With extensive experiments, we demonstrate that our proposed Mixed Autoencoder (MixedAE) achieves state-of-the-art transfer results among masked image modeling (MIM) augmentations on different downstream tasks with significantly higher efficiency. Specifically, our MixedAE outperforms MAE by +0.3% accuracy, +1.7 mIoU and +0.9 AP on ImageNet-1K, ADE20K and COCO respectively, with a standard ViT-Base. Moreover, MixedAE surpasses iBOT, a strong MIM method combined with instance discrimination, while accelerating training by 2x. To the best of our knowledge, this is the first work to consider mixing for MIM from the perspective of pretext task design. Code will be made available.
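To make the mixing augmentation and the homologous-recognition objective concrete, the following is a minimal, illustrative sketch, not the authors' implementation. It assumes images are already tokenized into patch embeddings of shape (batch, num_patches, dim); the function names, the patch-level mixing scheme, and the pairwise same-source target used here are all assumptions introduced for illustration.

```python
# Minimal sketch (not the paper's code) of patch-level image mixing and a
# homologous-recognition target. All names and design choices are illustrative.
import torch


def mix_patches(tokens_a, tokens_b, mix_ratio=0.5):
    """Replace a random fraction of patches in `tokens_a` with the patches at
    the same positions in `tokens_b` (a paired second image)."""
    b, n, _ = tokens_a.shape
    num_from_b = int(n * mix_ratio)
    # Randomly choose which patch positions come from image B.
    idx = torch.rand(b, n).argsort(dim=1)[:, :num_from_b]
    from_b = torch.zeros(b, n, dtype=torch.bool)
    from_b.scatter_(1, idx, True)
    mixed = torch.where(from_b.unsqueeze(-1), tokens_b, tokens_a)
    return mixed, from_b


def homologous_targets(from_b):
    """Pairwise target: 1 if two patches originate from the same source image.
    A head predicting this matrix is one way to realize the 'recognize
    homologous patches' objective described in the abstract (an assumption)."""
    src = from_b.long()                                     # (b, n) source id per patch
    return (src.unsqueeze(2) == src.unsqueeze(1)).float()   # (b, n, n)


if __name__ == "__main__":
    a, b = torch.randn(2, 196, 768), torch.randn(2, 196, 768)
    mixed, from_b = mix_patches(a, b, mix_ratio=0.5)
    targets = homologous_targets(from_b)
    print(mixed.shape, targets.shape)  # (2, 196, 768) and (2, 196, 196)
```

The sketch only shows how a mixed input and a same-source supervision signal could be constructed; how the auxiliary loss is attached to the encoder and combined with the reconstruction objective is described in the paper itself.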