Channel pruning has recently been formulated as a neural architecture search (NAS) problem. However, existing NAS-based methods suffer from huge computational cost and inflexibility in application. How to handle multiple sparsity constraints simultaneously and how to speed up NAS-based channel pruning remain open challenges. In this paper, we propose a novel Accurate and Automatic Channel Pruning (AACP) method to address these problems. First, AACP represents the structure of a model as a structure vector and introduces a pruning step vector to control the compression granularity of each layer. Second, AACP utilizes a Pruned Structure Accuracy Estimator (PSAE) to speed up the performance estimation process. Third, AACP proposes an Improved Differential Evolution (IDE) algorithm to search for the optimal structure vector efficiently. Thanks to IDE, AACP can handle the FLOPs constraint and the model size constraint simultaneously and efficiently. Our method can be easily applied to various tasks and achieves state-of-the-art performance. On CIFAR10, our method reduces the FLOPs of ResNet110 by $65\%$ while improving top-1 accuracy by $0.26\%$. On ImageNet, we reduce the FLOPs of ResNet50 by $42\%$ with a small top-1 accuracy loss of $0.18\%$, and reduce the FLOPs of MobileNetV2 by $30\%$ with a small top-1 accuracy loss of $0.7\%$. The source code will be released after publication.
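To make the search setup concrete, the following is a minimal toy sketch of a differential-evolution search over a structure vector whose entries are constrained to multiples of a per-layer pruning step, under a FLOPs budget. All names, the FLOPs proxy, and the fitness function are illustrative assumptions standing in for the paper's PSAE and IDE, not AACP's actual implementation.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical per-layer channel counts of a small network (structure vector)
# and the pruning step vector controlling each layer's granularity.
BASE_CHANNELS = [64, 128, 256]
STEP = [8, 16, 32]

def flops(structure):
    # Toy FLOPs proxy: cost of each conv-like layer scales with the
    # product of adjacent layer widths.
    return sum(a * b for a, b in zip(structure, structure[1:]))

def random_structure():
    # Sample each layer width as a nonzero multiple of its pruning step.
    return [s * random.randint(1, c // s) for c, s in zip(BASE_CHANNELS, STEP)]

def fitness(structure, budget):
    # Stand-in for a learned accuracy estimator (PSAE in the paper):
    # prefer wider structures, rejecting any that exceed the FLOPs budget.
    if flops(structure) > budget:
        return float("-inf")
    return sum(structure)

def differential_evolution(budget, pop_size=20, gens=50, F=0.5, cr=0.9):
    pop = [random_structure() for _ in range(pop_size)]
    for _ in range(gens):
        for i, target in enumerate(pop):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = []
            for d in range(len(target)):
                v = a[d] + F * (b[d] - c[d]) if random.random() < cr else target[d]
                # Snap to the pruning-step grid and clamp to a valid width.
                v = int(round(v / STEP[d])) * STEP[d]
                trial.append(max(STEP[d], min(BASE_CHANNELS[d], v)))
            if fitness(trial, budget) >= fitness(target, budget):
                pop[i] = trial  # greedy selection keeps the better vector
    return max(pop, key=lambda s: fitness(s, budget))

# Search for the widest structure within 50% of the original FLOPs.
BUDGET = 0.5 * flops(BASE_CHANNELS)
best = differential_evolution(BUDGET)
```

The step-grid snapping is what lets a single search handle coarse and fine layers together; adding a second constraint (e.g. model size) would amount to another feasibility check inside `fitness`.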