Convolutional neural networks (CNNs) have succeeded in many practical applications. However, their high computation and storage requirements often make them difficult to deploy on resource-constrained devices. To tackle this issue, many pruning algorithms have been proposed for CNNs, but most of them cannot prune CNNs to a reasonable level. In this paper, we propose a novel algorithm for training and pruning CNNs based on recursive least squares (RLS) optimization. After training a CNN for some epochs, our algorithm combines the inverse input autocorrelation matrices with the weight matrices to evaluate and prune unimportant input channels or nodes layer by layer. Then, our algorithm continues to train the pruned network, and does not perform the next pruning until the pruned network recovers the full performance of the unpruned network. Besides CNNs, the proposed algorithm can also be used for feedforward neural networks (FNNs). Three experiments on the MNIST, CIFAR-10 and SVHN datasets show that our algorithm achieves more reasonable pruning and higher learning efficiency than four other popular pruning algorithms.
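The following is a minimal sketch of one layer-wise pruning step of the kind described above. It assumes a hypothetical importance score (the column norm of the weight matrix scaled by the diagonal of the RLS inverse input autocorrelation matrix); the abstract only states that the inverse autocorrelation and weight matrices are combined to rank input channels or nodes, so the exact scoring rule here is an illustrative assumption, not the paper's formula.

```python
import numpy as np

def prune_layer(W, P, keep_ratio=0.5):
    """Sketch of one layer-wise pruning step.

    W : (n_out, n_in) weight matrix of the layer.
    P : (n_in, n_in) inverse input autocorrelation matrix maintained by RLS.
    keep_ratio : fraction of input nodes/channels to keep.

    The importance score below is an assumption for illustration; the paper
    only says the inverse autocorrelation and weight matrices are combined
    to evaluate unimportant input channels or nodes.
    """
    # Hypothetical importance of each input node/channel.
    scores = np.linalg.norm(W, axis=0) * np.diag(P)

    # Keep the highest-scoring inputs.
    n_keep = max(1, int(keep_ratio * W.shape[1]))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])

    # Shrink both the weight matrix and the inverse autocorrelation matrix
    # so RLS training can continue on the pruned layer before the next pruning.
    W_pruned = W[:, keep]
    P_pruned = P[np.ix_(keep, keep)]
    return W_pruned, P_pruned, keep

# Usage: prune a 256-input fully connected layer down to 128 inputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))
P = np.eye(256)  # stand-in for the RLS inverse autocorrelation matrix
W_p, P_p, kept = prune_layer(W, P, keep_ratio=0.5)
print(W_p.shape, P_p.shape)  # (64, 128) (128, 128)
```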