In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT. In each iteration, SKD samples a teacher model from a pre-defined teacher ensemble, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD. We heuristically present three types of sampling distributions to assign appropriate probabilities to the multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of the multi-level teacher models by stochastically sampling a single teacher model in each iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher model and the student model. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT$_{\rm BASE}$ model by 40% while retaining 99.5% of its language understanding performance and being 100% faster.
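The per-iteration teacher sampling described above can be sketched as follows; this is a minimal illustration only, in which the teacher ensemble, the sampling probabilities `probs`, the temperature `T`, and the temperature-scaled KL distillation loss are assumptions for exposition, not the paper's exact training recipe.

```python
# Minimal sketch of one SKD iteration (assumed soft-label distillation setup).
import random

import torch
import torch.nn.functional as F


def skd_step(student, teachers, probs, batch, optimizer, T=1.0):
    """Sample a single teacher from the multi-level ensemble according to
    `probs`, then distill its soft labels into the student one-to-one."""
    inputs = batch

    # Stochastically pick one teacher per iteration; sampling (rather than
    # averaging the ensemble) is what preserves the teachers' diversity.
    teacher = random.choices(teachers, weights=probs, k=1)[0]

    with torch.no_grad():
        teacher_logits = teacher(inputs)
    student_logits = student(inputs)

    # Soft-label KD loss: KL divergence between temperature-scaled distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch, `probs` stands in for one of the three heuristic sampling distributions over the multi-level teachers; only the sampled teacher is run in each iteration, so the per-step cost matches ordinary one-teacher distillation.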