Pre-trained Transformer-based models have achieved state-of-the-art performance for various Natural Language Processing (NLP) tasks. However, these models often have billions of parameters, and, thus, are too resource-hungry and computation-intensive to suit low-capability devices or applications with strict latency requirements. One potential remedy for this is model compression, which has attracted a lot of research attention. Here, we summarize the research in compressing Transformers, focusing on the especially popular BERT model. In particular, we survey the state of the art in compression for BERT, we clarify the current best practices for compressing large-scale Transformer models, and we provide insights into the workings of various methods. Our categorization and analysis also shed light on promising future research directions for achieving lightweight, accurate, and generic NLP models.