Despite the growing availability of high-capacity computational platforms, implementation complexity remains a major concern for the real-world deployment of neural networks. This concern stems not only from the substantial cost of state-of-the-art network architectures, but also from the recent push toward edge intelligence and the use of neural networks in embedded applications. In this context, network compression techniques have been gaining interest due to their ability to reduce deployment costs while keeping inference accuracy at satisfactory levels. The present paper is dedicated to the development of a novel compression scheme for neural networks. To this end, a new $\ell_0$-norm-based regularization approach is first developed, which is capable of inducing strong sparseness in the network during training. Then, by targeting the smaller weights of the trained network with pruning techniques, smaller yet highly effective networks can be obtained. The proposed compression scheme also involves the use of $\ell_2$-norm regularization to avoid overfitting, as well as fine-tuning to improve the performance of the pruned network. Experimental results are presented to show the effectiveness of the proposed scheme and to provide comparisons with competing approaches.
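To make the overall pipeline concrete, the following is a minimal sketch of the train-with-regularization, prune, and fine-tune loop described above, written in PyTorch. It assumes a smooth exponential surrogate in place of the (non-differentiable) $\ell_0$ norm and plain magnitude-based pruning; the paper's actual regularizer is not specified in this abstract, and the function names, thresholds, and hyperparameters below are hypothetical illustrations rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def l0_surrogate(model, beta=1e-2):
    # Hypothetical smooth surrogate for the l0 "norm":
    # sum of 1 - exp(-|w| / beta) over all trainable parameters.
    return sum((1.0 - torch.exp(-p.abs() / beta)).sum()
               for p in model.parameters() if p.requires_grad)

def train_step(model, loss_fn, x, y, optimizer, lam0=1e-4, lam2=1e-4):
    # One training step combining the task loss with the l0 surrogate
    # (to induce sparseness) and an l2 penalty (to limit overfitting).
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    l2 = sum((p ** 2).sum() for p in model.parameters() if p.requires_grad)
    total = loss + lam0 * l0_surrogate(model) + lam2 * l2
    total.backward()
    optimizer.step()
    return total.item()

def prune_small_weights(model, threshold=1e-3):
    # Zero out weights whose magnitude falls below the threshold.
    # The returned masks can be reapplied during fine-tuning so that
    # pruned connections stay removed.
    masks = {}
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.dim() > 1:  # prune weight matrices, leave biases intact
                mask = (p.abs() >= threshold).float()
                p.mul_(mask)
                masks[name] = mask
    return masks
```

A fine-tuning pass would then repeat `train_step` on the pruned network while multiplying each weight tensor by its stored mask after every update, so that accuracy lost to pruning can be recovered without reintroducing the removed connections.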