In the past few years, transformer-based pre-trained language models have achieved astounding success in both industry and academia. However, their large model size and high run-time latency are serious impediments to applying them in practice, especially on mobile phones and Internet of Things (IoT) devices. To compress these models, a considerable body of literature has recently grown up around the theme of knowledge distillation (KD). Nevertheless, how KD works in transformer-based models is still unclear. We tease apart the components of KD and propose a unified KD framework. Through this framework, systematic and extensive experiments consuming over 23,000 GPU hours yield a comprehensive analysis from the perspectives of knowledge types, matching strategies, the width-depth trade-off, initialization, model size, etc. Our empirical results shed light on distillation in pre-trained language models and achieve a significant improvement over the previous state of the art (SOTA). Finally, we provide a best-practice guideline for KD in transformer-based models.
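As background, a common response-based formulation of KD (the classic recipe of Hinton et al., given here only as a generic illustration and not as the specific framework proposed in this work) trains the student to fit both the ground-truth labels and the teacher's temperature-softened output distribution; the symbols below are standard notation rather than terms defined in this paper:
\[
\mathcal{L}_{\mathrm{KD}} \;=\; (1-\alpha)\,\mathcal{L}_{\mathrm{CE}}\bigl(y,\,\sigma(z_s)\bigr) \;+\; \alpha\,T^{2}\,\mathrm{KL}\bigl(\sigma(z_t/T)\,\big\|\,\sigma(z_s/T)\bigr),
\]
where $z_s$ and $z_t$ are the student and teacher logits, $\sigma$ is the softmax, $T$ is the distillation temperature, and $\alpha$ balances the hard-label and soft-label terms.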