Leveraging sparsity in deep neural network (DNN) models is a promising way to accelerate model inference. Yet existing GPUs can only exploit sparsity in weights, not in activations, which are dynamic, unpredictable, and hence challenging to harness. In this work, we propose a novel architecture to efficiently exploit dual-side sparsity (i.e., both weight and activation sparsity). We take a systematic approach to understand the (dis)advantages of previous sparsity-related architectures and propose a novel, unexplored paradigm that combines an outer-product computation primitive with a bitmap-based encoding format. We demonstrate the feasibility of our design with minimal changes to the existing, production-scale inner-product-based Tensor Core. We propose a set of novel ISA extensions and co-design the matrix-matrix multiplication and convolution algorithms, the two dominant computation patterns in today's DNN models, to exploit our new dual-side sparse Tensor Core. Our evaluation shows that our design can fully unleash dual-side DNN sparsity and improve performance by up to one order of magnitude with small hardware overhead.
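To make the outer-product-plus-bitmap paradigm concrete, the following is a minimal software sketch of one outer-product accumulation step over bitmap-encoded operands. The 32-wide tile, the `BitmapVec` layout, and the function name are our own illustrative assumptions, not the paper's actual hardware datapath: the point is only that when both operands carry a nonzero bitmap and densely packed values, multiply-accumulate work is generated solely for (row, column) pairs where both bits are set, which is how sparsity on both the weight and activation side is skipped.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical bitmap-encoded sparse vector: a 32-bit mask marks which of the
// 32 lanes are nonzero; `vals` stores only the nonzero values, packed in order
// (so it holds popcount(mask) entries).
struct BitmapVec {
    uint32_t mask;
    std::vector<float> vals;
};

// One outer-product update C += a (outer) b for a single shared-dimension
// index k. Only lane pairs (i, j) where both bitmaps have a set bit produce
// a multiply-accumulate, skipping dual-side (weight + activation) zeros.
void outer_product_accumulate(const BitmapVec& a, const BitmapVec& b,
                              float C[32][32]) {
    int ai = 0;  // position in a's packed value array
    for (uint32_t ma = a.mask; ma != 0; ma &= ma - 1, ++ai) {
        int i = __builtin_ctz(ma);      // row index of next nonzero of a
        int bj = 0;                      // position in b's packed value array
        for (uint32_t mb = b.mask; mb != 0; mb &= mb - 1, ++bj) {
            int j = __builtin_ctz(mb);  // column index of next nonzero of b
            C[i][j] += a.vals[ai] * b.vals[bj];
        }
    }
}
```

A full tile multiply would simply loop this over the shared dimension, `C += sum_k A[:,k] (outer) B[k,:]`; the contrast with an inner-product formulation is that no index matching between the two operands is needed, which is what makes the bitmap format cheap to consume.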