We introduce a simple yet effective distillation framework that is able to boost the vanilla ResNet-50 to 80%+ Top-1 accuracy on ImageNet without tricks. We construct such a framework by analyzing the problems in the existing classification system and simplifying the base method, ensemble knowledge distillation via discriminators, in two ways: (1) adopting the similarity loss and discriminator only on the final outputs, and (2) using the average of softmax probabilities from all teacher ensembles as the stronger supervision. Intriguingly, three novel perspectives are presented for distillation: (1) weight decay can be weakened or even completely removed, since the soft label also has a regularization effect; (2) using a good initialization for students is critical; and (3) the one-hot/hard label is not necessary in the distillation process if the weights are well initialized. We show that such a straightforward framework can achieve state-of-the-art results without involving any commonly used techniques, such as architecture modification, training data beyond ImageNet, autoaug/randaug, cosine learning rate, mixup/cutmix training, or label smoothing. Our method obtains 80.67% top-1 accuracy on ImageNet using a single crop size of 224x224 with vanilla ResNet-50, outperforming the previous state of the art by a significant margin under the same network structure. Our result can be regarded as a strong baseline for knowledge distillation, and to the best of our knowledge, this is also the first method able to boost vanilla ResNet-50 to surpass 80% on ImageNet without architecture modification or additional training data. On the smaller ResNet-18, our distillation framework consistently improves top-1 accuracy from 69.76% to 73.19%, which demonstrates tremendous practical value in real-world applications. Our code and models are available at: https://github.com/szq0214/MEAL-V2.
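To make the two simplifications concrete, below is a minimal PyTorch sketch of one training step: the student is supervised by the averaged softmax probabilities of frozen teachers (KL similarity loss) plus an adversarial term from a small discriminator on the final outputs only. The `Discriminator` architecture, the `distillation_step` helper, and the equal weighting of the two losses are illustrative assumptions and not the paper's exact configuration; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):
    """Tiny MLP that tries to tell teacher soft labels from student ones.
    Hypothetical sketch: applied only to final softmax outputs (1000-dim
    for ImageNet), per simplification (1) in the abstract."""
    def __init__(self, num_classes=1000, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, p):
        return self.net(p)  # raw logit: real (teacher) vs. fake (student)


def distillation_step(student, teachers, discriminator, images):
    """One step of the simplified ensemble distillation (student update).

    - Supervision is the *average* of the teachers' softmax probabilities
      (simplification (2)); no one-hot label is used.
    - Similarity (KL) and adversarial losses act on final outputs only.
    """
    with torch.no_grad():
        # Average softmax probabilities from all (frozen) teachers.
        teacher_prob = torch.stack(
            [F.softmax(t(images), dim=1) for t in teachers]
        ).mean(dim=0)

    student_logits = student(images)
    student_prob = F.softmax(student_logits, dim=1)

    # Similarity loss: KL divergence to the averaged teacher distribution.
    kl_loss = F.kl_div(
        F.log_softmax(student_logits, dim=1), teacher_prob,
        reduction="batchmean",
    )

    # Adversarial loss: student tries to make its outputs look "real"
    # (teacher-like) to the discriminator.
    adv_loss = F.binary_cross_entropy_with_logits(
        discriminator(student_prob),
        torch.ones(images.size(0), 1, device=images.device),
    )
    return kl_loss + adv_loss
```

Consistent with the three observations above, the student would be initialized from a well-trained checkpoint (e.g., a standard pre-trained ResNet-50) and the optimizer configured without weight decay; the discriminator itself is updated in a separate step with teacher outputs labeled real and student outputs labeled fake, following the usual adversarial training recipe.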