With the advancement of AI chips (e.g., GPUs, TPUs, and NPUs) and the rapid development of the Internet of Things (IoT), powerful deep neural networks (DNNs) often comprise millions or even hundreds of millions of parameters. Such large models are unsuitable for direct deployment on devices with limited computational power and memory (e.g., edge devices). Knowledge distillation (KD) has recently been recognized as a powerful model compression method that effectively reduces the number of model parameters. The central concept of KD is to extract useful information from the feature maps of a large model (i.e., teacher model) as a reference to successfully train a small model (i.e., student model) whose size is much smaller than that of the teacher. Although many KD methods exploit the information in the feature maps of the teacher model's intermediate layers, most do not consider the similarity of the feature maps between the teacher model and the student model, and may therefore lead the student model to learn useless information. Inspired by the attention mechanism, we propose a novel KD method called representative teacher key (RTK) that not only considers the similarity of feature maps but also filters out useless information to improve the performance of the target student model. In our experiments, we validate the proposed method with several backbone networks (e.g., ResNet and WideResNet) and datasets (e.g., CIFAR10, CIFAR100, SVHN, and CINIC10). The results show that our proposed RTK effectively improves the classification accuracy over the state-of-the-art attention-based KD methods.
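To make the teacher-student setup above concrete, the following is a minimal sketch of a generic KD loss combining a soft-label term with an attention-transfer-style feature-map term. It is not the proposed RTK method; all names and hyperparameters here (`T`, `alpha`, `beta`, the mean-squared attention term) are illustrative assumptions.

```python
# A minimal, generic knowledge-distillation loss sketch (assumed setup,
# not the paper's RTK method): hard-label cross-entropy, Hinton-style
# softened KL divergence, and an attention-transfer-style feature term.
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse a feature map (N, C, H, W) into a spatial attention
    # vector (N, H*W) by averaging squared activations over channels,
    # then L2-normalizing each sample.
    a = feat.pow(2).mean(dim=1).flatten(1)
    return F.normalize(a, p=2, dim=1)

def kd_loss(student_logits, teacher_logits, student_feat, teacher_feat,
            labels, T=4.0, alpha=0.9, beta=1000.0):
    # Hard-label cross-entropy on the student's own predictions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label KL divergence against the teacher's softened outputs;
    # the T*T factor rescales gradients as in standard KD.
    kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)
    # Feature-map similarity term between teacher and student attention maps.
    at = (attention_map(student_feat) - attention_map(teacher_feat)).pow(2).mean()
    return (1 - alpha) * ce + alpha * kl + beta * at
```

In this kind of setup, the feature term is what intermediate-layer KD methods vary; the abstract's point is that without accounting for teacher-student feature-map similarity, such a term can push the student toward uninformative teacher activations.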