Transformer-based pre-trained language models have significantly improved the performance of various natural language processing (NLP) tasks in recent years. While effective and prevalent, these models are usually prohibitively large for resource-limited deployment scenarios. A thread of research has thus been working on applying network pruning techniques under the pretrain-then-finetune paradigm widely adopted in NLP. However, the existing pruning results on benchmark transformers, such as BERT, are not as remarkable as the pruning results in the literature on convolutional neural networks (CNNs). In particular, common wisdom in CNN pruning holds that sparse pruning compresses a model more effectively than reducing the number of channels and layers (Elsen et al., 2020; Zhu and Gupta, 2017), while existing work on sparse pruning of BERT yields results inferior to its small-dense counterparts such as TinyBERT (Jiao et al., 2020). In this work, we aim to fill this gap by studying how knowledge is transferred and lost during the pre-training, fine-tuning, and pruning process, and by proposing a knowledge-aware sparse pruning process that achieves significantly better results than the existing literature. We show for the first time that sparse pruning compresses a BERT model significantly more than reducing its number of channels and layers. Experiments on multiple datasets of the GLUE benchmark show that our method outperforms the leading competitors with a 20-times weight/FLOPs compression and negligible loss in prediction accuracy.
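To make the contrast between sparse (unstructured) pruning and structured channel/layer reduction concrete, the sketch below illustrates global magnitude pruning of a transformer's linear layers combined with a distillation loss from the dense model. This is only a minimal illustration of the general technique named in the abstract, not the paper's exact knowledge-aware procedure; the function names, the global-threshold strategy, and the loss weighting are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: unstructured magnitude pruning of all nn.Linear weights,
# plus a knowledge-distillation loss from the dense fine-tuned "teacher".
# Not the paper's exact algorithm; hyperparameters here are placeholders.

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero out the globally smallest-magnitude weights in every nn.Linear."""
    all_weights = torch.cat([m.weight.detach().abs().flatten()
                             for m in model.modules() if isinstance(m, nn.Linear)])
    threshold = torch.quantile(all_weights, sparsity)  # e.g. sparsity=0.95 keeps 5% of weights
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.Linear):
            mask = (m.weight.detach().abs() > threshold).float()
            m.weight.data.mul_(mask)   # apply the sparse mask in place
            masks[name] = mask         # keep masks to re-apply after each optimizer step
    return masks

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend task cross-entropy with a soft-label KL term from the dense teacher."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd
```

In a typical pipeline of this kind, the masks returned by `magnitude_prune` are re-applied after every gradient update during sparse fine-tuning, so the pruned positions stay zero while the surviving weights continue to be trained under the combined loss.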