Large vision Transformers (ViTs) driven by self-supervised pre-training have achieved unprecedented progress. Lightweight ViT models, however, limited by model capacity, benefit little from those pre-training mechanisms. Knowledge distillation defines a paradigm to transfer representations from large (teacher) models to small (student) ones. However, conventional single-stage distillation easily gets stuck on task-specific transfer, failing to retain the task-agnostic knowledge crucial for model generalization. In this study, we propose generic-to-specific distillation (G2SD) to tap the potential of small ViT models under the supervision of large models pre-trained by masked autoencoders. In generic distillation, the decoder of the small model is encouraged to align its feature predictions with the hidden representations of the large model, so that task-agnostic knowledge is transferred. In specific distillation, the predictions of the small model are constrained to be consistent with those of the large model, transferring the task-specific features that guarantee task performance. With G2SD, the vanilla ViT-Small model achieves 98.7%, 98.1%, and 99.3% of the performance of its teacher (ViT-Base) on image classification, object detection, and semantic segmentation, respectively, setting a solid baseline for two-stage vision distillation. Code will be available at https://github.com/pengzhiliang/G2SD.
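To make the two-stage objective concrete, below is a minimal PyTorch-style sketch of the two losses described above. This is an illustration, not the authors' exact implementation: the tensor shapes, the smooth-L1 choice for feature alignment, the KL soft-label formulation, and the function names are assumptions for exposition.

```python
import torch
import torch.nn.functional as F


def generic_distillation_loss(student_decoder_out: torch.Tensor,
                              teacher_hidden: torch.Tensor) -> torch.Tensor:
    """Generic distillation: the small model's decoder predicts the large
    model's hidden representations, transferring task-agnostic knowledge.
    Both inputs are (batch, num_tokens, dim); smooth-L1 is an assumed choice."""
    return F.smooth_l1_loss(student_decoder_out, teacher_hidden)


def specific_distillation_loss(student_logits: torch.Tensor,
                               teacher_logits: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """Specific distillation: the small model's task predictions are
    constrained to match the large model's, here via soft-label KL divergence
    (an assumed formulation of the prediction-consistency constraint)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher,
                    reduction="batchmean") * (temperature ** 2)
```

In this reading, the generic loss is applied during masked-autoencoder-supervised pre-training of the student, and the specific loss during downstream fine-tuning, giving the two stages of G2SD.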