How can we efficiently compress a model while maintaining its performance? Knowledge Distillation (KD) is one of the most widely used methods for model compression. In essence, KD trains a smaller student model under the guidance of a larger teacher model and tries to retain as much of the teacher's performance as possible. However, existing KD methods suffer from the following limitations. First, since the student model is smaller in absolute size, it inherently lacks model capacity. Second, the absence of an initial guide for the student model makes it difficult for the student to fully imitate the teacher. Conventional KD methods yield low performance due to these limitations. In this paper, we propose Pea-KD (Parameter-efficient and accurate Knowledge Distillation), a novel approach to KD. Pea-KD consists of two main parts: Shuffled Parameter Sharing (SPS) and Pretraining with Teacher's Predictions (PTP). Using this combination, we alleviate the limitations of KD. SPS is a new parameter sharing method that increases the student model's capacity. PTP is a KD-specialized initialization method that acts as a good initial guide for the student. Combined, they yield a significant increase in the student model's performance. Experiments conducted on BERT with different datasets and tasks show that the proposed approach improves the student model's performance by 4.4\% on average across four GLUE tasks, outperforming existing KD baselines by significant margins.
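To make the KD setup above concrete, the following is a minimal sketch of the standard soft-label distillation loss (the KL divergence between temperature-softened teacher and student predictions). This illustrates generic KD as described in the first sentences of the abstract, not the Pea-KD method itself; the function names and the temperature value are illustrative assumptions.

```python
import math

def softmax(logits, temperature=1.0):
    # Softmax with temperature scaling: a higher temperature softens
    # the distribution, exposing the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over the softened distributions, scaled by
    # T^2 (conventional, so gradient magnitudes stay comparable across T).
    p = softmax(teacher_logits, temperature)  # teacher soft labels
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

In practice this term is typically mixed with the ordinary cross-entropy on the hard labels; the student minimizes the weighted sum of both.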