This paper examines the design-space trade-offs of DNN accelerators that aim to achieve competitive performance and efficiency for all four combinations of dense and sparse activation/weight tensors. To do so, we systematically examine the overheads of supporting sparsity on top of an optimized dense core. We model these overheads with parameters that describe how a multiplier can borrow a nonzero operation from neighboring multipliers or from future cycles. Through this exploration, we identify a few promising designs that outperform prior work. Our findings suggest that even the best design targeting dual sparsity loses 20%-30% in power efficiency when running single-sparsity models, i.e., those with only sparse weight or only sparse activation tensors. We show that the resources of the same core can be reused to maintain high performance and efficiency when running single-sparsity or dense models. We call this hybrid architecture Griffin. Griffin is 1.2x, 3.0x, 3.1x, and 1.4x more power-efficient than state-of-the-art sparse architectures on dense, weight-only sparse, activation-only sparse, and dual-sparse models, respectively.
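To make the borrowing idea concrete, the following is a minimal Python sketch of how such overhead parameters can be modeled. It assumes a simplified array of multiplier lanes, each with a queue of (weight, activation) operand pairs; the `lookahead` and `lookaside` knobs are illustrative names for borrowing from future cycles and from neighboring multipliers, not the paper's exact parameters or results.

```python
import random

def utilization(pairs, lookahead=2, lookaside=1):
    """Fraction of multiplier-cycles doing useful (nonzero) work.

    pairs[lane] is the list of (weight, activation) operands queued
    for that lane; a multiply is useful only if both are nonzero.
    `lookahead` lets a lane pull a nonzero pair from deeper in its
    own queue (borrowing from future cycles); `lookaside` lets it
    steal the head of a neighboring lane's queue.
    """
    lanes = [list(q) for q in pairs]
    busy = total = 0
    while any(lanes):
        for lane, q in enumerate(lanes):
            total += 1
            # Candidate operands: own queue up to `lookahead` deep,
            # then the heads of neighbors within `lookaside` lanes.
            candidates = [(lane, d) for d in range(min(lookahead + 1, len(q)))]
            for off in range(1, lookaside + 1):
                for n in (lane - off, lane + off):
                    if 0 <= n < len(lanes) and lanes[n]:
                        candidates.append((n, 0))
            for src, depth in candidates:
                w, a = lanes[src][depth]
                if w != 0 and a != 0:      # useful multiply found
                    lanes[src].pop(depth)
                    busy += 1
                    break
            else:
                if q:                      # cycle wasted on a zero product
                    q.pop(0)
    return busy / total if total else 0.0

random.seed(0)
dense = [[(1, 1)] * 64 for _ in range(4)]
sparse = [[(int(random.random() > 0.6), int(random.random() > 0.6))
           for _ in range(64)] for _ in range(4)]
print(utilization(dense))                             # 1.0: no wasted cycles
print(utilization(sparse, lookahead=0, lookaside=0))  # baseline, no borrowing
print(utilization(sparse, lookahead=2, lookaside=1))  # borrowing recovers work
```

Sweeping `lookahead` and `lookaside` in a model like this trades multiplexing hardware cost against recovered utilization, which is the kind of design-space exploration the abstract describes.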