Data-free knowledge distillation (DFKD) aims at training lightweight student networks from teacher networks without access to the original training data. Existing approaches mainly follow the paradigm of generating informative samples and progressively updating student models by targeting data priors, boundary samples, or memory samples. However, previous DFKD methods struggle to dynamically adjust the generation strategy at different training stages, which in turn makes efficient and stable training difficult. In this paper, we explore how to teach the student model from a curriculum learning (CL) perspective and propose a new approach, namely "CuDFKD", i.e., "Data-Free Knowledge Distillation with Curriculum". It gradually learns from easy samples to difficult samples, similar to the way humans learn. In addition, we provide a theoretical analysis based on the majorization minimization (MM) algorithm and explain the convergence of CuDFKD. Experiments conducted on benchmark datasets show that, with a simple curriculum design strategy, CuDFKD outperforms state-of-the-art DFKD methods across different benchmarks, e.g., achieving 95.28\% top-1 accuracy with a ResNet18 model on CIFAR10, which is better than training from scratch with data. Training is also fast, reaching 90\% accuracy within 30 epochs, and the variance during training remains stable. Finally, the applicability of CuDFKD is analyzed and discussed.