Unstructured neural network pruning algorithms have achieved impressive compression rates. However, the resulting (typically irregular) sparse matrices hamper efficient hardware implementations, leading to additional memory usage and complex control logic that diminish the benefits of unstructured pruning. This has spurred structured coarse-grained pruning solutions that prune entire filters or even layers, enabling efficient implementation at the expense of reduced flexibility. Here we propose a flexible new pruning mechanism that facilitates pruning at different granularities (weights, kernels, filters/feature maps), while retaining efficient memory organization (e.g., pruning exactly k-out-of-n weights for every output neuron, or pruning exactly k-out-of-n kernels for every feature map). We refer to this algorithm as Dynamic Probabilistic Pruning (DPP). DPP leverages the Gumbel-softmax relaxation for differentiable k-out-of-n sampling, facilitating end-to-end optimization. We show that DPP achieves competitive compression rates and classification accuracy when pruning common deep learning models trained on different benchmark datasets for image classification. Notably, the non-magnitude-based nature of DPP allows for joint optimization of pruning and weight quantization to compress the network even further, which we also demonstrate. Finally, we propose novel information-theoretic metrics that quantify the confidence and diversity of the pruning masks within a layer.
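The snippet below is a minimal PyTorch sketch of the core operation, differentiable k-out-of-n mask sampling with Gumbel-perturbed logits. It assumes a successive-softmax relaxation and a straight-through hard mask; the helper name `gumbel_topk_mask` and the specific relaxation are illustrative choices, not necessarily the exact DPP implementation.

```python
import torch
import torch.nn.functional as F

def gumbel_topk_mask(logits, k, tau=1.0, hard=True):
    """Draw a relaxed k-hot mask over the last dimension of `logits`.

    Gumbel noise perturbs the logits so that hard top-k selection corresponds
    to sampling k elements without replacement; the soft relaxation keeps the
    operation differentiable for end-to-end training.
    """
    # Perturb logits with Gumbel(0, 1) noise: g = -log(-log(U)).
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    perturbed = (logits + gumbel) / tau

    # Relaxed k-hot vector: sum of k softmax "rounds" without replacement
    # (a simple successive-softmax relaxation; other relaxations exist).
    soft = torch.zeros_like(logits)
    scores = perturbed.clone()
    for _ in range(k):
        probs = F.softmax(scores, dim=-1)
        soft = soft + probs
        # Suppress the probability mass already selected in this round.
        scores = scores + torch.log1p(-probs.clamp(max=1 - 1e-6))

    if hard:
        # Straight-through estimator: hard k-hot mask in the forward pass,
        # gradients flow through the soft relaxation in the backward pass.
        idx = perturbed.topk(k, dim=-1).indices
        hard_mask = torch.zeros_like(logits).scatter_(-1, idx, 1.0)
        return hard_mask + (soft - soft.detach())
    return soft

# Example: prune exactly k-out-of-n weights for every output neuron.
weights = torch.randn(64, 128)                        # (out_features, in_features)
logits = torch.randn(64, 128, requires_grad=True)     # learned pruning logits
mask = gumbel_topk_mask(logits, k=32)                 # keep 32 of 128 weights per neuron
pruned_weights = weights * mask
```

Because the mask is sampled (rather than thresholded on weight magnitude), the same mechanism can be applied at coarser granularities, e.g., over kernels per feature map, and trained jointly with weight quantization.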