Tucker decomposition is one of the state-of-the-art (SOTA) CNN model compression techniques. However, unlike the FLOPs reduction, we observe very limited inference time reduction with Tucker-compressed models using existing GPU software such as cuDNN. To this end, we propose an efficient end-to-end framework that can generate highly accurate and compact CNN models via Tucker decomposition and optimized inference code on GPUs. Specifically, we propose an ADMM-based training algorithm that can achieve highly accurate Tucker-format models. We also develop a high-performance kernel for Tucker-format convolutions and analytical performance models to guide the selection of execution parameters. We further propose a co-design framework to determine the proper Tucker ranks driven by practical inference time (rather than FLOPs). Our evaluation on five modern CNNs with an A100 GPU demonstrates that our compressed models with our optimized code achieve up to 3.14X speedup over cuDNN, 1.45X speedup over TVM, and 4.57X speedup over the original models using cuDNN, with up to 0.05% accuracy loss.
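To make the Tucker-format convolution the abstract refers to concrete, here is a minimal NumPy sketch of a Tucker-2 decomposition of a 4D convolution kernel via truncated HOSVD. This is an illustration of the general technique, not the paper's ADMM-based training algorithm or GPU kernel; the helper names (unfold, mode_dot, tucker2_conv) and the example ranks are illustrative assumptions.

```python
import numpy as np

def unfold(t, mode):
    # Mode-n unfolding: move axis `mode` to the front and flatten the rest.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_dot(t, m, mode):
    # Multiply tensor t by matrix m (shape (r, t.shape[mode])) along `mode`.
    t = np.moveaxis(t, mode, 0)
    s = t.shape
    out = (m @ t.reshape(s[0], -1)).reshape((m.shape[0],) + s[1:])
    return np.moveaxis(out, 0, mode)

def tucker2_conv(kernel, r_out, r_in):
    """Tucker-2 factorization of a conv kernel (C_out, C_in, kH, kW):
    kernel ~= core x_0 U x_1 V, decomposing only the two channel modes."""
    U = np.linalg.svd(unfold(kernel, 0), full_matrices=False)[0][:, :r_out]
    V = np.linalg.svd(unfold(kernel, 1), full_matrices=False)[0][:, :r_in]
    core = mode_dot(mode_dot(kernel, U.T, 0), V.T, 1)  # (r_out, r_in, kH, kW)
    return core, U, V

# Example: compress a 64x32x3x3 kernel to ranks (16, 8) and check the error.
K = np.random.randn(64, 32, 3, 3)
core, U, V = tucker2_conv(K, r_out=16, r_in=8)
K_hat = mode_dot(mode_dot(core, U, 0), V, 1)
print("relative error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```

In this form, one KxK convolution is replaced by three: a 1x1 convolution with V (channel reduction), a small KxK convolution with the core, and a 1x1 convolution with U (channel restoration). This is the source of the FLOPs reduction, and the ranks (r_out, r_in) are exactly the Tucker ranks that the paper's co-design framework selects based on measured inference time rather than FLOPs.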