As language models have grown in parameters and layers, training and running inference with them on a single GPU has become increasingly difficult. This severely restricts the availability of large language models such as GPT-3, BERT-Large, and many others. A common technique for addressing this problem is pruning the network architecture by removing transformer heads, fully-connected weights, and other modules. The main challenge is to distinguish the important parameters from the less important ones. Our goal is to find strong metrics for identifying such parameters. We therefore propose two strategies for computing importance scores: Cam-Cut, based on GradCAM interpretations, and Smooth-Cut, based on SmoothGrad. Through this work, we show that our scoring functions assign more relevant, task-based scores to the network parameters, and that both of our pruning approaches significantly outperform standard weight- and gradient-based strategies, especially at higher compression ratios in BERT-based models. We also analyze the resulting pruning masks and find that they differ substantially from those obtained with standard metrics.
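For intuition, below is a minimal sketch of one way a SmoothGrad-style importance score could be computed and used for magnitude-agnostic pruning: gradients of the loss with respect to each weight are averaged over several noise-perturbed copies of the input, and the lowest-scoring weights are masked out. This is not the paper's actual implementation; the function names (smoothgrad_importance, prune_by_score) and parameters (n_samples, noise_std, sparsity) are hypothetical, and the inputs are assumed to be continuous embeddings rather than token IDs.

```python
import torch

def smoothgrad_importance(model, loss_fn, inputs, targets,
                          n_samples=10, noise_std=0.01):
    """Average |dL/dw| over noise-perturbed inputs (SmoothGrad-style sketch).
    Assumes `inputs` is a float tensor, e.g. embedding vectors."""
    scores = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    for _ in range(n_samples):
        noisy = inputs + noise_std * torch.randn_like(inputs)
        model.zero_grad()
        loss = loss_fn(model(noisy), targets)
        loss.backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                # Accumulate the averaged absolute gradient as the score.
                scores[name] += p.grad.abs() / n_samples
    return scores

def prune_by_score(model, scores, sparsity=0.5):
    """Zero out the fraction `sparsity` of weights with the lowest scores."""
    flat = torch.cat([s.flatten() for s in scores.values()])
    threshold = torch.quantile(flat, sparsity)
    with torch.no_grad():
        for name, p in model.named_parameters():
            # Apply a binary mask; weights below the global threshold are dropped.
            p.mul_((scores[name] > threshold).float())
```

A GradCAM-style variant would instead weight intermediate activations by their gradients before aggregating into a score; the key design choice shared by both is that importance is derived from task-specific attribution signals rather than from raw weight magnitudes.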