Graphics Processing Units (GPUs) are currently the dominant programmable architecture for Deep Learning (DL) accelerators. The adoption of Field Programmable Gate Arrays (FPGAs) in DL accelerators is, however, gaining momentum. In this paper, we demonstrate that Direct Hardware Mapping (DHM) of a Convolutional Neural Network (CNN) on an embedded FPGA substantially outperforms a GPU implementation in terms of energy efficiency and execution time. However, DHM is highly resource intensive and cannot fully substitute for the GPU when implementing a state-of-the-art CNN. We thus propose a hybrid FPGA-GPU DL acceleration method and demonstrate that this heterogeneous acceleration outperforms GPU-only acceleration even when communication overheads are included. Experiments are conducted on a heterogeneous multi-platform setup embedding an Nvidia(R) Jetson TX2 CPU-GPU board and an Intel(R) Cyclone 10 GX FPGA board, using the mobile-oriented SqueezeNet, MobileNetv2, and ShuffleNetv2 CNNs. We show that heterogeneous FPGA-GPU acceleration outperforms GPU acceleration on the classification inference task for MobileNetv2 (12%-30% energy reduction, 4%-26% latency reduction), SqueezeNet (21%-28% energy reduction, same latency), and ShuffleNetv2 (25% energy reduction, 21% latency reduction).