Convolutional neural networks (CNNs) are becoming increasingly deep, wide, and non-linear because of the growing demand for prediction accuracy and analysis quality. These wide and deep CNNs, however, require large amounts of computing resources and processing time. Many previous works have studied model pruning to improve inference performance, but little work has been done on effectively reducing training cost. In this paper, we propose ClickTrain: an efficient and accurate end-to-end training and pruning framework for CNNs. Unlike existing pruning-during-training work, ClickTrain provides higher model accuracy and compression ratio via fine-grained, architecture-preserving pruning. By leveraging pattern-based pruning with our proposed accurate weight importance estimation, dynamic pattern generation and selection, and compiler-assisted computation optimizations, ClickTrain generates highly accurate and fast pruned CNN models ready for direct deployment, with no time overhead compared with baseline training. ClickTrain also reduces the end-to-end time cost of state-of-the-art pruning-after-training methods by up to about 67% with comparable accuracy and compression ratio. Moreover, compared with the state-of-the-art pruning-during-training approach, ClickTrain reduces the accuracy drop by up to 2.1% and improves the compression ratio by up to 2.2X on the tested datasets, under similarly limited training time.