Masked Autoencoders (MAE) have been a prevailing paradigm for large-scale vision representation pre-training. By reconstructing masked image patches from a small portion of visible image regions, MAE forces the model to infer semantic correlations within an image. Recently, some approaches have applied semantically rich teacher models to extract image features as the reconstruction target, leading to better performance. However, unlike low-level features such as pixel values, we argue that the features extracted by powerful teacher models already encode rich semantic correlations across regions of an intact image. This raises one question: is reconstruction necessary in Masked Image Modeling (MIM) with a teacher model? In this paper, we propose an efficient MIM paradigm named MaskAlign. MaskAlign simply learns the consistency between visible patch features extracted by the student model and intact image features extracted by the teacher model. To further improve performance and tackle the problem of input inconsistency between the student and teacher models, we propose a Dynamic Alignment (DA) module that applies learnable alignment. Our experimental results demonstrate that masked modeling does not lose effectiveness even without reconstruction on masked regions. Combined with Dynamic Alignment, MaskAlign can achieve state-of-the-art performance with much higher efficiency. Code and models will be available at https://github.com/OpenPerceptionX/maskalign.
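To make the core idea concrete, below is a minimal NumPy sketch of the consistency objective the abstract describes: the student, which sees only the visible patches, is trained so that its features match the teacher's features (extracted from the intact image) at the same patch positions. All names are hypothetical, the loss shown is a simple negative cosine similarity for illustration, and the Dynamic Alignment module is omitted; the actual implementation in the released code may differ.

```python
import numpy as np

def maskalign_loss(student_feats, teacher_feats, visible_idx):
    """Illustrative consistency loss between student and teacher features.

    student_feats: (num_visible, dim) array of features from the student,
                   which only processes the visible patches.
    teacher_feats: (num_patches, dim) array of features from the frozen
                   teacher, which processes the intact (unmasked) image.
    visible_idx:   indices of the visible patches, used to select the
                   teacher features that correspond to the student's input.
    """
    # Teacher features at the positions the student actually saw.
    target = teacher_feats[visible_idx]
    # L2-normalize both sides so the loss depends only on direction.
    s = student_feats / np.linalg.norm(student_feats, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    # Negative cosine similarity, averaged over visible patches:
    # 0 when perfectly aligned, up to 2 when opposite.
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))
```

Note that, unlike a reconstruction loss, this objective is only evaluated on the visible patches, which is what makes the paradigm cheaper than reconstructing the masked regions.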