Knowledge distillation (KD), which transfers knowledge from a cumbersome teacher model to a lightweight student model, has been investigated as a way to design efficient neural architectures. Generally, the objective function of KD is the Kullback-Leibler (KL) divergence loss between the softened probability distributions of the teacher and student models, where the softening is controlled by the temperature-scaling hyperparameter tau. Despite its widespread use, few studies have discussed how such softening influences generalization. Here, we theoretically show that the KL divergence loss focuses on logit matching as tau increases and on label matching as tau goes to 0, and we empirically show that logit matching is, in general, positively correlated with performance improvement. From this observation, we consider an intuitive KD loss function, the mean squared error (MSE) between the logit vectors, so that the student model can directly learn the logits of the teacher model. The MSE loss outperforms the KL divergence loss, which we explain by the difference in penultimate-layer representations induced by the two losses. Furthermore, we show that sequential distillation can improve performance and that KD, particularly the KL divergence loss with small tau, mitigates label noise. The code to reproduce the experiments is publicly available online at https://github.com/jhoon-oh/kd_data/.
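To make the two objectives concrete, the following is a minimal PyTorch sketch, written for illustration rather than taken from the authors' released code, of the temperature-scaled KL divergence loss and the MSE-on-logits loss described above. The tau^2 gradient-scaling factor and the batch-mean reduction are common conventions and are assumptions here.

```python
# Minimal sketch of the two KD objectives discussed in the abstract.
# Not the authors' official implementation; shapes, reduction choices,
# and the tau^2 scaling follow common practice.
import torch
import torch.nn.functional as F


def kd_kl_loss(student_logits: torch.Tensor,
               teacher_logits: torch.Tensor,
               tau: float) -> torch.Tensor:
    """KL divergence between temperature-softened distributions.

    Large tau pushes the loss toward logit matching; tau -> 0 pushes it
    toward matching only the teacher's top label.
    """
    log_p_student = F.log_softmax(student_logits / tau, dim=1)
    p_teacher = F.softmax(teacher_logits / tau, dim=1)
    # tau^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (tau ** 2)


def kd_mse_loss(student_logits: torch.Tensor,
                teacher_logits: torch.Tensor) -> torch.Tensor:
    """Direct logit matching: mean squared error between logit vectors."""
    return F.mse_loss(student_logits, teacher_logits)


if __name__ == "__main__":
    s = torch.randn(8, 10)  # student logits: batch of 8, 10 classes
    t = torch.randn(8, 10)  # teacher logits for the same batch
    print(kd_kl_loss(s, t, tau=4.0).item(), kd_mse_loss(s, t).item())
```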