Pruning is an effective method to reduce the memory footprint and FLOPs associated with neural network models. However, existing pruning methods often result in significant accuracy degradation for moderate pruning levels. To address this problem, we introduce a new Hessian Aware Pruning (HAP) method, which uses second-order sensitivity as a metric for structured pruning. In particular, we use the Hessian trace to find insensitive parameters in the neural network. This differs from magnitude-based pruning methods, which prune parameters with small weight magnitudes. We also propose a new neural implant method, which replaces pruned spatial convolutions with point-wise convolutions. We show that this method can improve the accuracy of pruned models while preserving the model size. We test HAP on multiple models (ResNet56, WideResNet32, PreResNet29, and VGG16 on CIFAR-10, and ResNet50 on ImageNet), and we achieve new state-of-the-art results. Specifically, HAP achieves 94.3\% accuracy ($<0.1\%$ degradation) on PreResNet29 (CIFAR-10), with more than 70\% of the parameters pruned. In comparison to EigenDamage~\cite{wang2019eigendamage}, we achieve up to 1.2\% higher accuracy with fewer parameters and FLOPs. Moreover, for ResNet50, HAP achieves 75.1\% top-1 accuracy (0.5\% degradation) on ImageNet, after pruning more than half of the parameters. In comparison to the prior state-of-the-art HRank~\cite{lin2020hrank}, we achieve up to 2\% higher accuracy with fewer parameters and FLOPs. The framework has been open sourced and is available online.
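To illustrate the second-order sensitivity metric described above, the following is a minimal PyTorch sketch of Hessian-trace estimation via Hutchinson's randomized estimator, the standard matrix-free approach (used, e.g., by PyHessian). This is only a sketch under stated assumptions, not the released HAP implementation; the \texttt{hessian\_trace} helper, the toy network, and the sample count are illustrative choices.

\begin{verbatim}
import torch
import torch.nn as nn

def hessian_trace(loss, params, n_samples=50):
    # Hutchinson's estimator: tr(H) = E[v^T H v] for Rademacher v.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace = 0.0
    for _ in range(n_samples):
        # Random Rademacher (+/-1) probe vectors, one per parameter tensor.
        vs = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
        # Hessian-vector product via a second backward pass.
        Hv = torch.autograd.grad(grads, params, grad_outputs=vs,
                                 retain_graph=True)
        trace += sum((v * hv).sum().item() for v, hv in zip(vs, Hv))
    return trace / n_samples

# Toy demo (hypothetical model standing in for ResNet etc.):
# score each conv layer by its average Hessian trace on one batch;
# layers with small scores are candidates for pruning.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 8 * 8, 10))
x, y = torch.randn(16, 3, 8, 8), torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(model(x), y)
convs = [m.weight for m in model.modules() if isinstance(m, nn.Conv2d)]
for i, w in enumerate(convs):
    score = hessian_trace(loss, [w], n_samples=20) / w.numel()
    print(f"conv{i}: avg Hessian trace = {score:.4e}")
\end{verbatim}

In contrast to magnitude-based criteria, this score reflects local loss curvature: a parameter group with a small trace can be pruned with little change in the loss even if its weights are not small.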