Channel pruning is widely used to reduce the complexity of deep network models. Recent pruning methods usually identify which parts of the network to discard by proposing a channel importance criterion. However, recent studies have shown that these criteria do not work well in all conditions. In this paper, we propose a novel Feature Shift Minimization (FSM) method to compress CNN models, which evaluates feature shift by combining information from both features and filters. Specifically, we first investigate the compression efficiency of several prevalent methods at different layer depths and then propose the concept of feature shift. We then introduce an approximation method to estimate the magnitude of the feature shift, since it is difficult to compute directly. In addition, we present a distribution-optimization algorithm that compensates for the accuracy loss and improves network compression efficiency. Extensive experiments on various benchmark networks and datasets verify that the proposed method yields state-of-the-art performance. The code is available at \url{https://github.com/lscgx/FSM}.
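To make the feature-shift intuition concrete, the following is a minimal NumPy sketch, not the paper's actual estimator: removing input channels of a layer shifts the mean pre-activation of the next layer, and that shift can be estimated in closed form from the per-channel feature means and the next layer's weights, without running the pruned network. All variable names and the toy linear-layer setup are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy setup (not the paper's exact formulation): a batch of
# per-channel features feeding a linear "next layer".
rng = np.random.default_rng(0)
C_in, C_out, N = 8, 4, 256
features = rng.normal(1.0, 0.5, size=(N, C_in))  # per-channel feature values
weights = rng.normal(size=(C_out, C_in))         # next layer's weights

keep = np.ones(C_in, dtype=bool)
keep[[2, 5]] = False  # prune channels 2 and 5

full = features @ weights.T              # pre-activations before pruning
pruned = (features * keep) @ weights.T   # pre-activations after pruning

# Feature shift: change in mean pre-activation caused by pruning.
shift = full.mean(axis=0) - pruned.mean(axis=0)

# Closed-form estimate: the pruned channels' mean features, projected
# through the corresponding weight columns.
estimate = weights[:, ~keep] @ features[:, ~keep].mean(axis=0)
assert np.allclose(shift, estimate)
```

In this linear toy case the estimate is exact; the point of the sketch is only that the shift induced by pruning is a simple function of channel statistics and weights, which is the kind of quantity an approximation method can target.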