Masked Image Modeling (MIM) has achieved outstanding success in self-supervised representation learning. Unfortunately, MIM models typically incur a huge computational burden and a slow learning process, which is an inevitable obstacle to their industrial applications. Although the lower layers play a key role in MIM, existing MIM models conduct the reconstruction task only at the top layer of the encoder. The lower layers are not explicitly guided, and the interaction among their patches is used only for computing new activations. Since the reconstruction task requires non-trivial inter-patch interactions to infer the target signals, we apply it to multiple local layers, including both lower and upper layers. Further, since different layers are expected to learn information at different scales, we design local multi-scale reconstruction, where the lower and upper layers reconstruct fine-scale and coarse-scale supervision signals, respectively. This design not only accelerates representation learning by explicitly guiding multiple layers, but also facilitates multi-scale semantic understanding of the input. Extensive experiments show that, with a significantly lower pre-training burden, our model achieves comparable or better performance on classification, detection, and segmentation tasks than existing MIM models.
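To make the scheme concrete, below is a minimal PyTorch sketch of local multi-scale reconstruction as the abstract describes it: reconstruction heads are attached to several intermediate encoder layers, with lower layers predicting fine-scale pooled pixel targets and upper layers predicting coarser ones. The block depths, tap layers, scale assignments, and the `patchify` helper are all illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def patchify(img, s):
    """Split an image into non-overlapping s x s patches, one row vector per patch."""
    B, C, H, W = img.shape
    img = img.unfold(2, s, s).unfold(3, s, s)            # B, C, H/s, W/s, s, s
    return img.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * s * s)

class LocalMIM(nn.Module):
    """Hypothetical sketch: multiple local reconstruction losses on a ViT encoder."""
    def __init__(self, dim=768, depth=12, patch=16,
                 tap_scales={3: 16, 6: 8, 9: 8, 12: 4}):  # layer -> target pixels per patch side
        super().__init__()
        self.patch, self.tap_scales = patch, tap_scales
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=12, dim_feedforward=4 * dim,
                                       batch_first=True) for _ in range(depth))
        # One lightweight head per tapped layer; lower layers get fine-scale
        # (large s) targets, upper layers coarse-scale (small s) targets.
        self.heads = nn.ModuleDict(
            {str(l): nn.Linear(dim, 3 * s * s) for l, s in tap_scales.items()})

    def forward(self, img, mask):                         # mask: B, N bool, True = masked
        x = self.embed(img).flatten(2).transpose(1, 2)    # B, N, dim
        # Replace masked patch embeddings with a learnable mask token (MAE-style).
        x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        loss = 0.0
        for i, blk in enumerate(self.blocks, start=1):
            x = blk(x)
            if i in self.tap_scales:
                s = self.tap_scales[i]
                # Build the scale-s supervision signal by average-pooling the
                # image down to s pixels per patch side, then patchifying.
                tgt = patchify(F.avg_pool2d(img, self.patch // s), s)
                pred = self.heads[str(i)](x)
                # Reconstruction loss is computed only on masked patches.
                loss = loss + ((pred - tgt) ** 2).mean(-1)[mask].mean()
        return loss

model = LocalMIM()
img = torch.randn(2, 3, 224, 224)
mask = torch.rand(2, 196) < 0.75                          # mask 75% of the patches
loss = model(img, mask)
loss.backward()
```

The scale assignment reflects the abstract's design choice: lower layers, which model low-level detail, are supervised with high-resolution (fine-scale) targets, while upper layers, which aggregate context, are supervised with pooled (coarse-scale) targets, so every tapped layer receives an explicit reconstruction signal matched to the scale it is expected to learn.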