We introduce Dirichlet pruning, a novel post-processing technique that transforms a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning that places a Dirichlet distribution over the channels of each convolutional layer (or the neurons of each fully-connected layer) and estimates the parameters of this distribution via variational inference. The learned distribution allows us to remove unimportant units, yielding a compact architecture that retains only the features crucial for the task at hand. The number of newly introduced Dirichlet parameters is linear in the number of channels, which permits rapid training, requiring as little as one epoch to converge. We perform extensive experiments, in particular on larger architectures such as VGG and ResNet (45% and 58% compression rates, respectively), where our method achieves state-of-the-art compression performance and provides interpretable features as a by-product.
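To make the mechanism concrete, below is a minimal sketch of the core idea, not the authors' implementation: per-channel importance weights are drawn from a learnable Dirichlet and gate a convolutional layer's output, and channels with low expected importance become pruning candidates. The names `DirichletGate` and `keep_mask`, and the threshold value, are hypothetical, and the variational objective (e.g., a KL term against a prior over the concentrations) is omitted for brevity.

```python
# Illustrative sketch only: Dirichlet-distributed channel gates for pruning.
import torch
import torch.nn as nn
from torch.distributions import Dirichlet

class DirichletGate(nn.Module):
    """Gates each channel of a conv layer by a Dirichlet-distributed weight."""
    def __init__(self, num_channels: int):
        super().__init__()
        # Unconstrained parameters; softplus keeps the concentrations positive.
        self.log_alpha = nn.Parameter(torch.zeros(num_channels))

    def concentration(self) -> torch.Tensor:
        return torch.nn.functional.softplus(self.log_alpha) + 1e-4

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sample per-channel importances; rsample gives pathwise gradients,
        # so the concentrations can be trained by backpropagation.
        w = Dirichlet(self.concentration()).rsample()   # shape: (C,)
        return x * w.view(1, -1, 1, 1)                   # scale each channel

    def keep_mask(self, threshold: float = 1e-2) -> torch.Tensor:
        # Channels whose expected importance (mean of the Dirichlet) falls
        # below the threshold are candidates for removal.
        alpha = self.concentration()
        mean_importance = alpha / alpha.sum()
        return mean_importance > threshold

# Usage: wrap an existing conv layer, train briefly (the gate adds only one
# parameter per channel), then drop channels where keep_mask() is False.
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
gate = DirichletGate(num_channels=16)
x = torch.randn(8, 3, 32, 32)
out = gate(conv(x))
print(gate.keep_mask().sum().item(), "channels kept")
```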