Network pruning is a widely used technique for effectively compressing deep neural networks with little to no degradation in inference performance. Iterative Magnitude Pruning (IMP) is one of the most established approaches to network pruning: it consists of several iterative training and pruning steps, where a significant amount of the network's performance is lost after pruning and then recovered in the subsequent retraining phase. While commonly used as a benchmark reference, it is often argued that a) IMP reaches suboptimal states because it does not incorporate sparsification into the training phase, b) its global selection criterion fails to properly determine optimal layer-wise pruning rates, and c) its iterative nature makes it slow and non-competitive. In light of recently proposed retraining techniques, we investigate these claims through rigorous and consistent experiments in which we compare IMP to pruning-during-training algorithms, evaluate proposed modifications of its selection criterion, and study the number of iterations and the total training time actually required. We find that IMP with SLR for retraining can outperform state-of-the-art pruning-during-training approaches with little or no computational overhead, that the global magnitude selection criterion is largely competitive with more complex approaches, and that only a few retraining epochs are needed in practice to achieve most of the sparsity-vs.-performance tradeoff of IMP. Our goals are both to demonstrate that basic IMP can already provide state-of-the-art pruning results, on par with or even outperforming more complex or heavily parameterized approaches, and to establish a more realistic yet easily realizable baseline for future research.
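The train-prune-retrain loop with a global magnitude criterion described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the layers are plain Python lists standing in for weight tensors, and `train` is a placeholder for actual SGD retraining. All function names and the recovery heuristic inside `train` are assumptions made for the sketch.

```python
def global_magnitude_prune(layers, masks, sparsity):
    """Zero out the fraction `sparsity` of the smallest-magnitude surviving
    weights, ranked globally across all layers (not per layer)."""
    alive = [(abs(w), i, j)
             for i, layer in enumerate(layers)
             for j, w in enumerate(layer) if masks[i][j]]
    alive.sort()                              # smallest magnitudes first
    n_prune = int(sparsity * len(alive))      # weights to remove this round
    for _, i, j in alive[:n_prune]:
        masks[i][j] = 0
        layers[i][j] = 0.0

def train(layers, masks, epochs=1):
    """Placeholder retraining step: nudges surviving weights toward 1.0 to
    mimic performance recovery; a real run would perform SGD here."""
    for _ in range(epochs):
        for layer, mask in zip(layers, masks):
            for j, m in enumerate(mask):
                if m:
                    layer[j] += 0.1 * (1.0 - layer[j])

def imp(layers, per_round_sparsity=0.2, rounds=3, retrain_epochs=1):
    """Iterative Magnitude Pruning: train, then alternate global magnitude
    pruning with retraining for a fixed number of rounds."""
    masks = [[1] * len(layer) for layer in layers]
    train(layers, masks, retrain_epochs)      # initial dense training
    for _ in range(rounds):
        global_magnitude_prune(layers, masks, per_round_sparsity)
        train(layers, masks, retrain_epochs)  # recover lost performance
    return layers, masks
```

Because the criterion ranks weights globally, layer-wise pruning rates emerge implicitly from the magnitude distribution rather than being set per layer, which is exactly the design choice the abstract finds largely competitive with more complex layer-wise schemes.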