The recent breakthroughs in deep neural networks (DNNs) have spurred tremendously increased demand for DNN accelerators. However, designing DNN accelerators is non-trivial: it often takes months or years and requires cross-disciplinary knowledge. To enable fast and effective DNN accelerator development, we propose DNN-Chip Predictor, an analytical performance predictor that can accurately predict a DNN accelerator's energy, throughput, and latency prior to its actual implementation. Our Predictor features two highlights: (1) its analytical performance formulation of DNN ASIC/FPGA accelerators facilitates fast design space exploration and optimization; and (2) it supports DNN accelerators with different algorithm-to-hardware mapping methods (i.e., dataflows) and hardware architectures. Experimental results based on 2 DNN models and 3 different ASIC/FPGA implementations show that DNN-Chip Predictor's predicted performance differs from chip measurements of the FPGA/ASIC implementations by no more than 17.66% across different DNN models, hardware architectures, and dataflows. We will release code upon acceptance.