With the recent popularity of Transformer-based models such as BERT, GPT-3, and ChatGPT, state-of-the-art performance has been achieved on a wide range of natural language processing tasks. However, the massive computation, large memory footprint, and consequently high latency of Transformer-based models pose an inevitable challenge for cloud deployment with strict real-time requirements. To tackle this issue, we propose BBCT, a method of block-wise bit-compression for Transformers that requires no retraining. Our method achieves finer-grained compression of the whole Transformer, including the embedding, matrix multiplication, GELU, softmax, and layer normalization operations, as well as all intermediate results. As a case study, we compress an efficient BERT with BBCT. Benchmark results on the General Language Understanding Evaluation (GLUE) suite show that BBCT incurs an accuracy drop of less than 1% on most tasks.
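To make the block-wise bit-compression idea concrete, the following is a minimal post-training (no retraining) quantization sketch: each block of values receives its own scale, so an outlier in one block does not degrade the precision of the others. The block size, bit width, and helper names here are illustrative assumptions, not the exact BBCT procedure.

```python
import numpy as np

def blockwise_quantize(x, block_size=64, n_bits=8):
    """Quantize a tensor block by block with a per-block max-abs scale.
    Illustrative sketch; not the authors' exact BBCT algorithm."""
    orig_shape = x.shape
    flat = x.reshape(-1, block_size)                  # assumes size divisible by block_size
    scale = np.abs(flat).max(axis=1, keepdims=True)   # one scale per block
    scale = np.where(scale == 0, 1.0, scale)          # avoid division by zero
    qmax = 2 ** (n_bits - 1) - 1                      # e.g. 127 for 8-bit signed codes
    q = np.round(flat / scale * qmax).astype(np.int8)
    return q.reshape(orig_shape), scale, qmax

def blockwise_dequantize(q, scale, qmax, block_size=64):
    """Recover an approximate float tensor from the integer codes."""
    flat = q.reshape(-1, block_size).astype(np.float32)
    return (flat * scale / qmax).reshape(q.shape)

# Usage: quantize a weight matrix post-training and measure the error.
w = np.random.randn(768, 768).astype(np.float32)
q, s, qmax = blockwise_quantize(w, block_size=64, n_bits=8)
w_hat = blockwise_dequantize(q, s, qmax, block_size=64)
print("mean abs error:", np.abs(w - w_hat).mean())
```

In the paper's setting, such block-wise quantization would be applied not only to weights but also to activations and intermediate results (GELU, softmax, layer normalization outputs), which is what distinguishes whole-model compression from weight-only schemes.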