Novel architectures for deep learning exploit both activation and weight sparsity to improve the performance of DNN inference. However, this speedup usually brings non-negligible overheads that diminish the efficiency of such designs when running dense models. These overheads are exacerbated in particular for low-precision accelerators with an optimized SRAM size per core. This paper examines the design-space trade-offs of such accelerators, aiming to achieve competitive performance and efficiency metrics for all four combinations of dense or sparse activation/weight tensors. To do so, we systematically examine the overheads of supporting sparsity on top of an optimized dense core. These overheads are modeled based on parameters that indicate how a multiplier can borrow a nonzero operation from neighboring multipliers or from future cycles. As a result of this exploration, we identify a few promising designs that perform better than prior work. Our findings suggest that even the best design targeting dual sparsity suffers a 20%-30% drop in power efficiency when running single-sparsity models, i.e., those with only sparse weight or only sparse activation tensors. We introduce novel techniques to reuse the resources of the same core to maintain high performance and efficiency when running single-sparsity or dense models. We call this hybrid design Griffin. Griffin is 1.2x, 3.0x, 3.1x, and 1.4x more power-efficient than state-of-the-art sparse architectures for dense, weight-only sparse, activation-only sparse, and dual-sparse models, respectively.
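To make the "borrow" idea concrete, below is a minimal Python sketch (not the paper's implementation) of how a multiplier lane might steal a nonzero operand pair either from its own future slots or from a neighboring lane. The function name and the `lookahead`/`lookaside` parameter names are assumptions introduced for illustration; they stand in for the abstract's parameters governing borrowing from future cycles and from neighboring multipliers.

```python
def schedule_nonzeros(lanes, lookahead=2, lookaside=1):
    """Sketch of nonzero-borrowing scheduling (illustrative only).

    lanes: list of per-multiplier operand queues, each a list of
    (activation, weight) pairs; a pair with a zero operand is skippable.
    Returns a per-cycle list of the work issued on each lane
    (None means the multiplier idles that cycle).
    """
    queues = [list(q) for q in lanes]  # copy so we can pop as pairs are consumed
    schedule = []
    while any(queues):
        cycle = []
        for i, q in enumerate(queues):
            issued = None
            # 1) Lookahead: scan up to `lookahead` future slots of this lane
            #    for a pair with both operands nonzero.
            for d in range(min(lookahead + 1, len(q))):
                a, w = q[d]
                if a != 0 and w != 0:
                    issued = q.pop(d)
                    break
            # 2) Lookaside: otherwise try to borrow the front pair of a
            #    neighboring lane within `lookaside` distance.
            if issued is None:
                for off in range(1, lookaside + 1):
                    for j in (i - off, i + off):
                        if 0 <= j < len(queues) and queues[j]:
                            a, w = queues[j][0]
                            if a != 0 and w != 0:
                                issued = queues[j].pop(0)
                                break
                    if issued is not None:
                        break
            # Nothing to borrow: drop one of this lane's own (zero) slots
            # so its queue still drains.
            if issued is None and q:
                q.pop(0)
            cycle.append(issued)
        schedule.append(cycle)
    return schedule
```

Larger `lookahead`/`lookaside` values reduce idle multipliers under sparsity but cost extra multiplexing and bookkeeping, which is the overhead trade-off the design-space exploration quantifies.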