Current methods for pruning neural network weights iteratively apply magnitude-based pruning on the model weights and re-train the resulting model to recover lost accuracy. In this work, we show that such strategies do not allow for the recovery of erroneously pruned weights. To enable weight recovery, we propose a simple strategy called \textit{cyclical pruning} which requires the pruning schedule to be periodic and allows for weights pruned erroneously in one cycle to recover in subsequent ones. Experimental results on both linear models and large-scale deep neural networks show that cyclical pruning outperforms existing pruning algorithms, especially at high sparsity ratios. Our approach is easy to tune and can be readily incorporated into existing pruning pipelines to boost performance.
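To make the idea concrete, the following is a minimal sketch of a cyclic magnitude-pruning schedule, assuming the target sparsity ramps up within each cycle and the magnitude-based mask is re-derived from the current weights as training proceeds. The helpers \texttt{cyclical\_sparsity} and \texttt{magnitude\_mask}, the toy model, and the training loop are illustrative only and not the exact procedure of the paper.
\begin{verbatim}
import torch

def cyclical_sparsity(step, steps_per_cycle, final_sparsity):
    # Sparsity ramps from 0 to the final target within each cycle,
    # then resets, so weights pruned in an earlier cycle can return.
    phase = (step % steps_per_cycle) / steps_per_cycle
    return final_sparsity * phase

def magnitude_mask(weight, sparsity):
    # Keep the largest-magnitude entries; zero out the rest.
    k = int(sparsity * weight.numel())
    if k == 0:
        return torch.ones_like(weight)
    threshold = weight.abs().flatten().kthvalue(k).values
    return (weight.abs() > threshold).float()

layer = torch.nn.Linear(64, 10)               # toy model
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
steps_per_cycle, final_sparsity = 100, 0.9

for step in range(3 * steps_per_cycle):       # three pruning cycles
    x = torch.randn(32, 64)
    y = torch.randint(0, 10, (32,))
    loss = torch.nn.functional.cross_entropy(layer(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
    # Re-derive the mask from current magnitudes at every step; at a
    # cycle boundary the sparsity drops back toward zero, giving
    # erroneously pruned weights a chance to grow back.
    s = cyclical_sparsity(step, steps_per_cycle, final_sparsity)
    mask = magnitude_mask(layer.weight.data, s)
    layer.weight.data.mul_(mask)
\end{verbatim}
Under a monotone schedule the mask can only grow, so a weight zeroed early in training stays zeroed; resetting the sparsity at each cycle boundary is what lets such weights re-enter the model in later cycles.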