Filter pruning has been widely used for neural network compression because of the practical acceleration it enables. To date, most existing filter pruning works explore the importance of filters via intra-channel information. In this paper, starting from an inter-channel perspective, we propose to perform efficient filter pruning using Channel Independence, a metric that measures the correlations among different feature maps. A less independent feature map is interpreted as containing less useful information/knowledge, and hence its corresponding filter can be pruned without affecting model capacity. We systematically investigate the quantification metric, the measuring scheme, and the sensitivity/reliability of channel independence in the context of filter pruning. Our evaluation results for different models on various datasets show the superior performance of our approach. Notably, on the CIFAR-10 dataset our solution brings $0.75\%$ and $0.94\%$ accuracy increases over the baseline ResNet-56 and ResNet-110 models, respectively, while the model size and FLOPs are reduced by $42.8\%$ and $47.4\%$ (for ResNet-56) and $48.3\%$ and $52.1\%$ (for ResNet-110), respectively. On the ImageNet dataset, our approach achieves $40.8\%$ and $44.8\%$ storage and computation reductions, respectively, with a $0.15\%$ accuracy increase over the baseline ResNet-50 model. The code is available at https://github.com/Eclipsess/CHIP_NeurIPS2021.
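To make the inter-channel idea concrete, below is a minimal, illustrative sketch of how one might score channels by their independence from the other feature maps of a layer. It is not the paper's implementation (see the repository linked above for that); it assumes, purely for illustration, that a channel's independence can be approximated by how much the nuclear norm of the stacked feature-map matrix drops when that channel is masked out. The function name `channel_independence_scores` and all constants are hypothetical.

```python
# Illustrative sketch only: one possible way to quantify channel independence.
# Assumption (not taken from the abstract): independence of channel i is approximated
# by the drop in nuclear norm of the stacked feature-map matrix when channel i is zeroed.
import numpy as np


def channel_independence_scores(feature_maps: np.ndarray) -> np.ndarray:
    """feature_maps: activations of one layer for one sample, shape (C, H, W).

    Returns one score per channel. A smaller score means the channel is largely
    explained by the remaining channels (less independent), making it a pruning candidate.
    """
    c = feature_maps.shape[0]
    flat = feature_maps.reshape(c, -1)            # each feature map becomes a row: (C, H*W)
    full_norm = np.linalg.norm(flat, ord="nuc")   # nuclear norm of the full matrix
    scores = np.empty(c)
    for i in range(c):
        masked = flat.copy()
        masked[i] = 0.0                           # remove channel i's contribution
        # If the norm barely drops, channel i carries little independent information.
        scores[i] = full_norm - np.linalg.norm(masked, ord="nuc")
    return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fmaps = rng.standard_normal((8, 4, 4))
    fmaps[3] = 0.5 * fmaps[0] + 0.5 * fmaps[1]    # make channel 3 nearly redundant
    scores = channel_independence_scores(fmaps)
    prune_order = np.argsort(scores)              # least independent channels first
    print("scores:", np.round(scores, 3))
    print("pruning candidates (least independent first):", prune_order)
```

In this toy example the synthetically redundant channel receives the lowest score, matching the intuition that feature maps expressible by other channels contribute little independent information and can be pruned first; the paper's actual quantification metric and measuring scheme are studied systematically in the main text.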