Pruning is a promising approach to compress complex deep learning models so that they can be deployed on resource-constrained edge devices. However, many existing pruning solutions are based on unstructured pruning, which yields models that cannot run efficiently on commodity hardware; they also require users to manually explore and tune the pruning process, which is time-consuming and often leads to sub-optimal results. To address these limitations, this paper presents an adaptive, activation-based, structured pruning approach that automatically and efficiently generates small, accurate, and hardware-efficient models meeting user requirements. First, it proposes iterative structured pruning using activation-based attention feature maps to effectively identify and prune unimportant filters. Then, it proposes adaptive pruning policies that automatically meet the pruning objectives of accuracy-critical, memory-constrained, and latency-sensitive tasks. A comprehensive evaluation shows that the proposed method substantially outperforms state-of-the-art structured pruning methods on the CIFAR-10 and ImageNet datasets. For example, on ResNet-56 with CIFAR-10, without any accuracy drop, our method achieves the largest parameter reduction (79.11%), outperforming the related works by 22.81% to 66.07%, and the largest FLOPs reduction (70.13%), outperforming the related works by 14.13% to 26.53%.
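To make the core idea concrete, below is a minimal PyTorch sketch of activation-based filter scoring; it is an illustration of the general technique, not the paper's exact implementation. The helper name `attention_importance` and the scoring rule (mean L2 norm of each filter's absolute-activation map over a calibration batch) are assumptions for illustration.

```python
# A minimal sketch (not the paper's exact method) of ranking convolutional
# filters by activation-based attention maps, so the least important filters
# can be structurally pruned.
import torch
import torch.nn as nn

def attention_importance(conv: nn.Conv2d, x: torch.Tensor) -> torch.Tensor:
    """Return one importance score per output filter of `conv`.

    A filter's attention map is taken here as the spatial map of its
    absolute activations; its importance is the L2 norm of that map,
    averaged over the calibration inputs `x` ([batch, in_ch, H, W]).
    """
    with torch.no_grad():
        acts = conv(x)                # [B, out_channels, H', W']
        attn = acts.abs()             # activation-based attention maps
        # L2 norm of each filter's attention map, averaged over the batch.
        scores = attn.flatten(2).norm(dim=2).mean(dim=0)  # [out_channels]
    return scores

# Usage: score filters on a calibration batch and select the lowest-ranked
# fraction for removal (one step of an iterative pruning loop).
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
calib = torch.randn(8, 3, 32, 32)     # CIFAR-10-sized calibration batch
scores = attention_importance(conv, calib)
prune_ratio = 0.5                     # hypothetical per-layer pruning ratio
num_prune = int(prune_ratio * conv.out_channels)
prune_idx = torch.argsort(scores)[:num_prune]
print("filters selected for pruning:", prune_idx.tolist())
```

Because whole filters (not individual weights) are removed, the pruned layer stays a dense convolution with fewer output channels, which is why structured pruning of this kind runs efficiently on commodity hardware.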