Pruning is an effective way to reduce the huge inference cost of Transformer models. However, prior work on pruning Transformers requires retraining the models, which adds high training cost and complexity to model deployment, making pruning difficult to use in many practical situations. To address this, we propose a fast post-training pruning framework for Transformers that does not require any retraining. Given a resource constraint and a sample dataset, our framework automatically prunes the Transformer model using structured sparsity methods. To retain high accuracy without retraining, we introduce three novel techniques: (i) a lightweight mask search algorithm that finds which heads and filters to prune based on the Fisher information; (ii) mask rearrangement that complements the search algorithm; and (iii) mask tuning that reconstructs the output activations of each layer. We apply our method to BERT-base and DistilBERT, and we evaluate its effectiveness on the GLUE and SQuAD benchmarks. Our framework achieves up to a 2.0x reduction in FLOPs and a 1.56x speedup in inference latency while maintaining < 1% loss in accuracy. Importantly, our framework prunes Transformers in less than 3 minutes on a single GPU, which is over two orders of magnitude faster than existing pruning approaches that retrain the models.
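For illustration only, the following is a minimal sketch of how a Fisher-information-based importance score for attention-head masks could be computed in PyTorch; it is not the paper's implementation, and `head_masks`, `loss_fn`, and `batches` are hypothetical placeholders for mask variables multiplied onto each head's output, a loss callable, and a sample dataset.

```python
import torch

def fisher_head_importance(model, loss_fn, batches, head_masks):
    """Estimate per-head importance as the diagonal empirical Fisher,
    I_i ~= sum over samples of (dL/dm_i)^2, where m_i is a mask variable
    (held at 1.0) applied to head i's output inside the model.
    All arguments are illustrative placeholders, not the paper's API."""
    importance = [torch.zeros_like(m) for m in head_masks]
    for batch in batches:
        loss = loss_fn(model, batch)
        grads = torch.autograd.grad(loss, head_masks)  # dL/dm for each mask
        for imp, g in zip(importance, grads):
            imp += g.detach() ** 2  # accumulate squared gradients
    return importance  # heads with the lowest scores are pruning candidates
```

Under this kind of scoring, heads (and, analogously, FFN filters) whose masks have the smallest accumulated squared gradients contribute least to the loss and can be pruned first, subject to the resource constraint.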