Knowledge Distillation (KD) for Convolutional Neural Networks (CNNs) has been extensively studied as a way to boost the performance of a small model. Recently, the Vision Transformer (ViT) has achieved great success on many computer vision tasks, and KD for ViT is also desired. However, besides output logit-based KD, other feature-based KD methods designed for CNNs cannot be directly applied to ViT due to the huge structural gap. In this paper, we explore feature-based distillation for ViT. Based on the nature of feature maps in ViT, we design a series of controlled experiments and derive three practical guidelines for ViT feature distillation. Some of our findings even run counter to the practices of the CNN era. Based on these three guidelines, we propose our feature-based method ViTKD, which brings consistent and considerable improvement to the student. On ImageNet-1k, we boost DeiT-Tiny from 74.42% to 76.06%, DeiT-Small from 80.55% to 81.95%, and DeiT-Base from 81.76% to 83.46%. Moreover, ViTKD and logit-based KD are complementary and can be applied together directly, further improving the performance of the student. Specifically, the students DeiT-Tiny, DeiT-Small, and DeiT-Base achieve 77.78%, 83.59%, and 85.41%, respectively. The code is available at https://github.com/yzd-v/cls_KD.
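To make the notion of feature-based distillation concrete, the sketch below shows the generic form such methods take: a distance between intermediate student and teacher features, with a learned projection bridging their different embedding dimensions. This is a minimal illustrative example in NumPy, not the ViTKD method itself; the shapes (197 tokens, dimensions 192 and 768, matching DeiT-Tiny and a larger teacher) and the plain MSE objective are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ViT feature shapes: (tokens, dim). 197 = 14*14 patches + CLS.
# The student's embedding dim (192) is smaller than the teacher's (768),
# so a learned linear projection aligns them before computing the distance.
tokens, d_student, d_teacher = 197, 192, 768

student_feat = rng.standard_normal((tokens, d_student))
teacher_feat = rng.standard_normal((tokens, d_teacher))

# Projection matrix; in practice this is trained jointly with the student.
W = rng.standard_normal((d_student, d_teacher)) / np.sqrt(d_student)

def feature_kd_loss(s: np.ndarray, t: np.ndarray, W: np.ndarray) -> float:
    """Mean-squared error between projected student features and teacher features."""
    return float(np.mean((s @ W - t) ** 2))

loss = feature_kd_loss(student_feat, teacher_feat, W)
print(f"feature distillation loss: {loss:.4f}")
```

During training, this term would be added (with a weight) to the usual classification loss; gradients flow through both the projection `W` and the student backbone, pulling the student's features toward the teacher's.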