Data-free knowledge distillation (DFKD) aims at training lightweight student networks from large pretrained teacher networks without access to the original training data. Existing approaches follow the paradigm of generating informative samples and updating student models by targeting data priors, boundary samples, or memory samples. However, they do not dynamically adjust the generation strategy at different training stages, which makes it difficult for DFKD to achieve efficient and stable training. In this paper, we explore how to teach the student model from a dynamic perspective and propose a new approach, namely "CuDFKD", i.e., "\textbf{D}ata-\textbf{F}ree \textbf{K}nowledge \textbf{D}istillation with \textbf{Cu}rriculum". It dynamically learns from easy samples to difficult samples, which is similar to human learning. In addition, we provide a theoretical analysis based on the majorization minimization (MM) algorithm and explain the convergence of CuDFKD. Experiments conducted on benchmark datasets show that, with a simple curriculum design strategy, CuDFKD achieves the best performance among state-of-the-art DFKD methods across different benchmarks, even better than training from scratch with data. The training is also fast, reaching its highest accuracy of 90\% within 15 epochs when distilling ResNet34 to ResNet18 on CIFAR10. Besides, the applicability of CuDFKD is analyzed and discussed.
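For context, the convergence claim above rests on the standard majorization minimization scheme; the following display is a generic recap of MM (notation $f$, $g$, $\theta_t$ is ours, not the paper's specific surrogate construction). At each iteration $t$, one builds a surrogate $g(\cdot \mid \theta_t)$ that upper-bounds the objective $f$ and is tight at the current iterate, then minimizes the surrogate:
\begin{align*}
g(\theta \mid \theta_t) \;\ge\; f(\theta) \quad \forall \theta,
\qquad
g(\theta_t \mid \theta_t) \;=\; f(\theta_t),
\qquad
\theta_{t+1} \;=\; \operatorname*{arg\,min}_{\theta}\; g(\theta \mid \theta_t),
\end{align*}
which yields monotone descent, $f(\theta_{t+1}) \le g(\theta_{t+1} \mid \theta_t) \le g(\theta_t \mid \theta_t) = f(\theta_t)$, and hence convergence of the objective values under mild regularity conditions.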