Filter pruning has been widely used to compress and accelerate convolutional neural networks (CNNs). However, most existing methods still suffer from heavy computational cost and biased filter selection. Moreover, most filter-evaluation designs lack interpretability due to the absence of appropriate theoretical guidance. In this paper, we propose a novel filter pruning method that evaluates filters in an interpretable, multi-perspective, and data-free manner. We introduce information capacity, a metric that represents the amount of information contained in a filter. Based on the interpretability and validity of information entropy, we adopt it as the quantitative index of information quantity. In addition, we experimentally show a strong correlation between the entropy of a feature map and that of the corresponding filter, which motivates an interpretable, data-driven scheme to measure the information capacity of each filter. Furthermore, we introduce information independence, a second metric that captures the correlation among different filters. Consequently, the least important filters, i.e., those with low information capacity and low information independence, are pruned. We evaluate our method on two benchmarks using multiple representative CNN architectures, including VGG-16 and ResNet. On CIFAR-10, we reduce floating-point operations (FLOPs) by 71.9% and parameters by 69.4% for ResNet-110 with a 0.28% accuracy increase. On ILSVRC-2012, we reduce FLOPs by 76.6% and parameters by 68.6% for ResNet-50 with only a 2.80% accuracy drop, outperforming state-of-the-art methods.
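To make the two scoring criteria concrete, the sketch below illustrates one plausible reading of the abstract: information capacity estimated as the entropy of each filter's weights (via a histogram estimator), and information independence estimated from pairwise correlations among flattened filters. The histogram-based entropy, the `1 - mean |correlation|` definition, the multiplicative combination of the two scores, and all function names are assumptions for illustration, not the authors' exact formulation.

```python
# Hypothetical sketch of entropy-based filter scoring and correlation-based
# independence scoring for one convolutional layer (PyTorch).
import torch

def information_capacity(weight: torch.Tensor, bins: int = 32) -> torch.Tensor:
    """Entropy-based score per filter; weight has shape (out_ch, in_ch, k, k)."""
    scores = []
    for f in weight:
        v = f.flatten()
        hist = torch.histc(v, bins=bins, min=float(v.min()), max=float(v.max()))
        p = hist / hist.sum()
        p = p[p > 0]                      # drop empty bins to avoid log(0)
        scores.append(-(p * p.log()).sum())
    return torch.stack(scores)

def information_independence(weight: torch.Tensor) -> torch.Tensor:
    """1 minus the mean absolute correlation of each filter with the others."""
    flat = weight.flatten(1)              # (out_ch, in_ch * k * k)
    flat = (flat - flat.mean(dim=1, keepdim=True)) / (flat.std(dim=1, keepdim=True) + 1e-8)
    corr = (flat @ flat.t()) / flat.shape[1]
    corr.fill_diagonal_(0.0)
    return 1.0 - corr.abs().mean(dim=1)

def filters_to_prune(weight: torch.Tensor, prune_ratio: float = 0.5) -> torch.Tensor:
    """Indices of the least important filters under the combined score."""
    score = information_capacity(weight) * information_independence(weight)
    n_prune = int(prune_ratio * weight.shape[0])
    return torch.argsort(score)[:n_prune]

# Example: score and rank the 64 filters of a VGG-16-like first conv layer.
conv_weight = torch.randn(64, 3, 3, 3)
print(filters_to_prune(conv_weight, prune_ratio=0.3))
```

In this reading, a filter is kept only if it both carries enough information on its own (high entropy) and is not redundant with other filters in the same layer (low correlation), which is why the two scores are combined before ranking.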


