The remarkable performance of deep convolutional neural networks (CNNs) is generally attributed to their deeper and wider architectures, which can come with significant computational costs. Pruning neural networks has therefore attracted interest, since it effectively lowers storage and computational costs. In contrast to weight pruning, which results in unstructured models, structured pruning provides the benefit of realistic acceleration by producing models that are friendly to hardware implementation. The special requirements of structured pruning have led to the discovery of numerous new challenges and the development of innovative solutions. This article surveys the recent progress towards structured pruning of deep CNNs. We summarize and compare the state-of-the-art structured pruning techniques with respect to filter ranking methods, regularization methods, dynamic execution, neural architecture search, the lottery ticket hypothesis, and the applications of pruning. While discussing structured pruning algorithms, we briefly introduce the unstructured pruning counterpart to emphasize their differences. Furthermore, we provide insights into potential research opportunities in the field of structured pruning. A curated list of neural network pruning papers can be found at https://github.com/he-y/Awesome-Pruning.
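To make the contrast between the two pruning families concrete, the following minimal NumPy sketch compares unstructured (per-weight magnitude) pruning with structured (whole-filter, L1-norm-based) pruning on a toy convolutional weight tensor. The tensor shape, the 50% sparsity ratio, and the number of kept filters are illustrative assumptions, not values from the survey:

```python
import numpy as np

# Toy 4-D conv weight tensor: (out_channels, in_channels, kH, kW).
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3, 3, 3))

# Unstructured pruning: zero out the 50% of individual weights with the
# smallest magnitude. The tensor keeps its dense shape, so real speedups
# require sparse-computation support.
k = W.size // 2
thresh = np.sort(np.abs(W).ravel())[k]
W_unstructured = np.where(np.abs(W) < thresh, 0.0, W)

# Structured (filter) pruning: rank whole filters by their L1 norm and
# drop the weakest ones, which shrinks the tensor itself and yields a
# smaller dense model that standard hardware can run faster.
l1_norms = np.abs(W).reshape(W.shape[0], -1).sum(axis=1)
keep = np.sort(np.argsort(l1_norms)[4:])  # keep the 4 strongest filters
W_structured = W[keep]

print(W_unstructured.shape)  # dense shape unchanged: (8, 3, 3, 3)
print(W_structured.shape)    # smaller dense tensor:  (4, 3, 3, 3)
```

Note the asymmetry: the unstructured result is the same-sized tensor with scattered zeros, while the structured result is a genuinely smaller tensor, which is why structured pruning translates directly into hardware-friendly acceleration.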