Neural network pruning techniques reduce the number of parameters without compromising the predictive ability of a network. Many algorithms have been developed for pruning both over-parameterized fully-connected networks (FCNs) and convolutional neural networks (CNNs), but analytical studies of the capabilities and compression ratios of such pruned sub-networks are lacking. We theoretically study the performance of two pruning techniques (random and magnitude-based) on FCNs and CNNs. Given a target network whose weights are independently sampled from appropriate distributions, we provide a universal approach to bound the gap between a pruned network and the target network in a probabilistic sense. The results establish that there exist pruned networks with expressive power within any specified bound from the target network.
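To make the two pruning schemes concrete, the following is a minimal NumPy sketch (not the paper's construction) of magnitude-based and random pruning applied to a single weight matrix; the function names, layer shape, and sparsity level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of entries with the smallest magnitudes."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return weights * (np.abs(weights) > threshold)

def random_prune(weights, sparsity):
    """Zero out a uniformly random fraction `sparsity` of entries."""
    mask = rng.random(weights.shape) >= sparsity
    return weights * mask

# Hypothetical example: prune one fully-connected layer's weights to 90% sparsity.
W = rng.standard_normal((256, 128))
W_mag = magnitude_prune(W, 0.9)
W_rand = random_prune(W, 0.9)
print(np.mean(W_mag == 0), np.mean(W_rand == 0))  # achieved sparsity of each scheme
```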