Neural network pruning is a popular model compression method that can significantly reduce computing cost with negligible loss of accuracy. Recently, filters have often been pruned directly by designing proper criteria or using auxiliary modules to measure their importance, which, however, requires expertise and trial-and-error. Thanks to its automation, pruning by evolutionary algorithms (EAs) has attracted much attention, but its performance is limited for deep neural networks, where the search space can be quite large. In this paper, we propose CCEP, a new filter pruning algorithm based on cooperative coevolution, which prunes the filters in each layer by EAs separately. That is, CCEP reduces the pruning space via a divide-and-conquer strategy. Experiments show that CCEP achieves performance competitive with state-of-the-art pruning methods, e.g., pruning $63.42\%$ of the FLOPs of ResNet56 on CIFAR10 with a $-0.24\%$ accuracy drop, and $44.56\%$ of the FLOPs of ResNet50 on ImageNet with a $0.07\%$ accuracy drop.
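To make the divide-and-conquer idea concrete, the following is a minimal sketch of cooperative coevolution for layer-wise filter pruning. It keeps one subpopulation of binary filter masks per layer and evolves each layer separately, evaluating a candidate mask together with the current best masks of the other layers. The layer sizes, hyperparameters, and the surrogate `fitness` function are all illustrative assumptions; in CCEP the objective would involve the accuracy and FLOPs of the actual pruned network, not this toy proxy.

```python
import random

random.seed(0)

# Hypothetical network: number of filters per layer (illustrative only).
LAYER_SIZES = [16, 32, 64]
POP_SIZE = 8
GENERATIONS = 20
ALPHA = 0.5  # trade-off weight between the "accuracy" proxy and sparsity

def random_mask(n):
    """A binary mask over one layer's filters: 1 keeps a filter, 0 prunes it."""
    return [random.randint(0, 1) for _ in range(n)]

def mutate(mask, p=0.1):
    """Flip each bit independently with probability p."""
    return [b ^ (random.random() < p) for b in mask]

def fitness(masks):
    """Toy surrogate objective (assumption): reward sparsity while
    penalizing keep ratios far from 60%. A real objective would
    evaluate the pruned network's accuracy and FLOPs."""
    kept = sum(sum(m) for m in masks)
    keep_ratio = kept / sum(LAYER_SIZES)
    return -abs(keep_ratio - 0.6) + ALPHA * (1 - keep_ratio)

# One subpopulation per layer: the divide-and-conquer decomposition.
pops = [[random_mask(n) for _ in range(POP_SIZE)] for n in LAYER_SIZES]
best = [pop[0] for pop in pops]  # current best mask of each layer

for gen in range(GENERATIONS):
    for i, pop in enumerate(pops):  # evolve each layer's masks separately
        scored = []
        for mask in pop:
            # Cooperate: evaluate this layer's mask with the other
            # layers fixed at their current best masks.
            candidate = best[:i] + [mask] + best[i + 1:]
            scored.append((fitness(candidate), mask))
        scored.sort(key=lambda t: t[0], reverse=True)
        best[i] = scored[0][1]
        # Truncation selection: keep the top half, refill by mutation.
        survivors = [m for _, m in scored[:POP_SIZE // 2]]
        pop[:] = survivors + [mutate(m) for m in survivors]

print("kept filters per layer:", [sum(m) for m in best])
```

Because each subpopulation only searches over one layer's masks, the search space per EA shrinks from the product of all layers' mask spaces to a single layer's, which is the advantage the abstract attributes to the divide-and-conquer strategy.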