Filter pruning is a common method for model compression and acceleration in deep neural networks (DNNs). Some studies have treated filter pruning as a combinatorial optimization problem and used evolutionary algorithms (EAs) to prune the filters of DNNs. However, it is difficult to find a satisfactory compromise solution in a reasonable time due to the complexity of searching the solution space. To address this problem, we first formulate a multi-objective optimization problem based on a sub-network of the full model and propose a Sub-network Multiobjective Evolutionary Algorithm (SMOEA) for filter pruning. By progressively pruning the convolutional layers in groups, SMOEA obtains a lightweight pruned model with better performance. Experiments on the VGG-14 model for CIFAR-10 verify the effectiveness of the proposed SMOEA. Specifically, the accuracy of the pruned model with 16.56% of the parameters drops by only 0.28%, which is better than widely used filter pruning criteria.
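To make the formulation concrete, the sketch below illustrates the general idea of casting filter pruning as a two-objective problem (validation error vs. fraction of parameters kept) and searching over binary filter-keep masks with a small evolutionary loop. This is not the authors' SMOEA; all names (`evaluate_accuracy`, `objectives`, the toy accuracy surrogate, population sizes) are hypothetical stand-ins, and in practice the accuracy objective would be measured by building and validating the actual sub-network.

```python
import numpy as np

# Hypothetical sketch: a binary mask over the filters of one convolutional
# group, evaluated against two objectives (error, parameter ratio).

rng = np.random.default_rng(0)
NUM_FILTERS = 64                      # filters in the group being pruned

def evaluate_accuracy(mask: np.ndarray) -> float:
    # Placeholder: in practice this would build the sub-network that keeps
    # only the masked filters and measure validation accuracy on CIFAR-10.
    return 0.9 - 0.2 * (1.0 - mask.mean())   # toy monotone surrogate

def objectives(mask: np.ndarray) -> tuple[float, float]:
    error = 1.0 - evaluate_accuracy(mask)     # objective 1: minimize error
    param_ratio = mask.mean()                 # objective 2: minimize kept parameters
    return error, param_ratio

def dominates(a, b) -> bool:
    # Pareto dominance: no worse in every objective, strictly better in one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Tiny (mu + lambda)-style evolutionary loop with bit-flip mutation.
population = [rng.integers(0, 2, NUM_FILTERS) for _ in range(20)]
for _ in range(50):
    offspring = []
    for mask in population:
        child = mask.copy()
        flip = rng.random(NUM_FILTERS) < 0.05   # mutate ~5% of the bits
        child[flip] ^= 1
        offspring.append(child)
    pool = population + offspring
    fits = [objectives(m) for m in pool]
    # Keep non-dominated individuals first, then fill the rest by error.
    nondom = [i for i, f in enumerate(fits)
              if not any(dominates(fits[j], f) for j in range(len(pool)) if j != i)]
    rest = sorted(set(range(len(pool))) - set(nondom), key=lambda i: fits[i][0])
    population = [pool[i] for i in (nondom + rest)[:20]]

best = min(population, key=lambda m: objectives(m)[0])
print("kept filters:", int(best.sum()), "objectives:", objectives(best))
```

In the grouped, progressive setting described above, a loop of this kind would be run per group of convolutional layers, with the surviving filters of earlier groups fixed before the next group is pruned.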