Knowledge Distillation (KD) is a prominent neural model compression technique that relies heavily on the predictions of a teacher network to guide the training of a student model. Given the ever-growing size of pre-trained language models (PLMs), KD is widely adopted in NLP tasks involving PLMs. However, deploying the teacher network during training adds to the memory and computational requirements of training. In the computer vision literature, the necessity of the teacher network has been called into question by showing that KD is a label regularization technique that can be replaced with lighter teacher-free variants such as label smoothing. To the best of our knowledge, however, this question has not been investigated in NLP. This work therefore studies different label regularization techniques and whether we actually need them to improve the fine-tuning of smaller PLMs on downstream tasks. To this end, we conducted a comprehensive set of experiments on different PLMs such as BERT, RoBERTa, and GPT, comprising more than 600 distinct trials with each configuration run five times. This investigation led to a surprising observation: KD and other label regularization techniques do not play any meaningful role over regular fine-tuning when the student model is pre-trained. We further explore this phenomenon in different NLP and computer vision settings and demonstrate that pre-training itself acts as a kind of regularization, making additional label regularization unnecessary.
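For concreteness, a minimal sketch of the two objectives being contrasted (notation ours, not taken from this abstract; $\lambda$, $\tau$, and $\varepsilon$ are generic hyperparameters): the standard KD loss mixes the cross-entropy with the hard labels and the KL divergence to the teacher's temperature-softened predictions, whereas label smoothing is a teacher-free variant that instead mixes the hard labels with a uniform distribution over the $K$ classes.

$$
\mathcal{L}_{\mathrm{KD}} = (1-\lambda)\,\mathrm{CE}\!\left(y,\, p_s\right) \;+\; \lambda\,\tau^{2}\,\mathrm{KL}\!\left(p_t^{\tau}\,\middle\|\,p_s^{\tau}\right),
\qquad
\mathcal{L}_{\mathrm{LS}} = \mathrm{CE}\!\left((1-\varepsilon)\,y + \tfrac{\varepsilon}{K}\,\mathbf{1},\; p_s\right),
$$

where $p_s$ is the student's output distribution and $p_t^{\tau}$, $p_s^{\tau}$ are the teacher's and student's distributions softened with temperature $\tau$. Viewing both losses as replacing the hard target $y$ with a softer target distribution is what motivates treating KD as a form of label regularization.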