Convolutional neural networks (CNNs) are becoming increasingly deeper, wider, and more non-linear because of the growing demand for prediction accuracy and analysis quality. Such wide and deep CNNs, however, require a large amount of computing resources and processing time. Many previous works have studied model pruning to improve inference performance, but little has been done to effectively reduce training cost. In this paper, we propose ClickTrain: an efficient and accurate end-to-end training and pruning framework for CNNs. Unlike existing pruning-during-training work, ClickTrain provides higher model accuracy and compression ratio via fine-grained architecture-preserving pruning. By leveraging pattern-based pruning with our proposed novel accurate weight importance estimation, dynamic pattern generation and selection, and compiler-assisted computation optimizations, ClickTrain generates highly accurate and fast pruned CNN models for direct deployment without any extra time overhead compared with baseline training. ClickTrain also reduces the end-to-end time cost of the pruning-after-training method by up to 2.3X with comparable accuracy and compression ratio. Moreover, compared with the state-of-the-art pruning-during-training approach, ClickTrain provides significant improvements in both accuracy and compression ratio on the tested CNN models and datasets, under a similar limited training time.
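The pattern-based pruning mentioned above can be illustrated with a minimal sketch: each 3x3 convolution kernel is restricted to a small set of sparsity patterns, and the pattern preserving the most weight magnitude is selected per kernel. Note that the pattern set and the function name below are hypothetical illustrations, not ClickTrain's actual patterns or importance-estimation method.

```python
import numpy as np

# Hypothetical 4-entry patterns for a 3x3 kernel (illustrative only;
# ClickTrain generates and selects its patterns dynamically).
PATTERNS = [
    np.array([[0, 1, 0], [1, 1, 1], [0, 0, 0]], dtype=bool),
    np.array([[0, 0, 0], [1, 1, 1], [0, 1, 0]], dtype=bool),
    np.array([[0, 1, 0], [1, 1, 0], [0, 1, 0]], dtype=bool),
    np.array([[0, 1, 0], [0, 1, 1], [0, 1, 0]], dtype=bool),
]

def prune_kernel(kernel: np.ndarray) -> np.ndarray:
    """Zero out a 3x3 kernel except for the best-matching pattern,
    scored here by total preserved weight magnitude."""
    scores = [np.abs(kernel[p]).sum() for p in PATTERNS]
    best = PATTERNS[int(np.argmax(scores))]
    return kernel * best

rng = np.random.default_rng(0)
kernel = rng.normal(size=(3, 3))
pruned = prune_kernel(kernel)
assert (pruned != 0).sum() <= 4  # at most 4 of 9 weights survive
```

Because every kernel keeps the same number of nonzeros in a regular shape, the layer's architecture is preserved and a compiler can generate efficient code for the resulting sparsity, which is the property the abstract's compiler-assisted optimizations exploit.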