The increasing size of generative Pre-trained Language Models (PLMs) has greatly increased the demand for model compression. Despite various methods for compressing BERT and its variants, there have been few attempts to compress generative PLMs, and the underlying difficulty remains unclear. In this paper, we compress generative PLMs by quantization. We find that previous quantization methods fail on generative tasks due to \textit{homogeneous word embeddings} caused by reduced capacity, and the \textit{varied distribution of weights} across modules. Correspondingly, we propose a token-level contrastive distillation to learn distinguishable word embeddings, and a module-wise dynamic scaling to make quantizers adaptive to different modules. Empirical results on various tasks show that our proposed method outperforms state-of-the-art compression methods on generative PLMs by a clear margin. With performance comparable to the full-precision models, we achieve 14.4x and 13.4x compression rates on GPT-2 and BART, respectively.
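As a rough illustration only (the abstract does not give the objective, so the notation below is ours rather than the paper's), token-level contrastive distillation can be sketched as an InfoNCE-style loss over token representations: each quantized token representation is pulled toward its full-precision counterpart and pushed away from the other tokens in the sequence, which encourages the low-bit model to keep word embeddings distinguishable. Here $\hat{\mathbf{h}}_i$ and $\mathbf{h}_i$ denote the quantized and full-precision representations of the $i$-th token, $\mathrm{sim}(\cdot,\cdot)$ a similarity function such as cosine similarity, $\tau$ a temperature, and $n$ the sequence length:
\[
\mathcal{L}_{\text{contrast}} \;=\; -\frac{1}{n}\sum_{i=1}^{n} \log \frac{\exp\!\big(\mathrm{sim}(\hat{\mathbf{h}}_i, \mathbf{h}_i)/\tau\big)}{\sum_{j=1}^{n} \exp\!\big(\mathrm{sim}(\hat{\mathbf{h}}_i, \mathbf{h}_j)/\tau\big)}.
\]
This is a hedged sketch of the general technique; the exact formulation used in this work is given in the method section.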