We present a compilation flow for generating CNN inference accelerators on FPGAs. The flow translates a frozen model into OpenCL kernels with the TVM compiler and uses the Intel OpenCL SDK to compile them into an FPGA bitstream. We improve the quality of the generated hardware with optimizations applied to the base OpenCL kernels generated by TVM. These optimizations increase parallelism, reduce memory access latency, increase concurrency, and save on-chip resources. We automate these optimizations in TVM and evaluate them by generating accelerators for LeNet-5, MobileNetV1, and ResNet-34 on an Intel Stratix~10 SX. We show that the optimizations improve the performance of the generated accelerators by up to 846X over the base accelerators. The performance of the optimized accelerators is up to 4.57X better than TensorFlow on a CPU, 3.83X better than single-threaded TVM, and 0.34X that of TVM with 56 threads. Our optimized kernels also outperform those generated by a similar approach (which also uses high-level synthesis) while providing more functionality and flexibility, but they underperform an approach that relies on hand-optimized designs. We therefore view our approach as useful in pre-production environments that benefit from increased performance and fast prototyping, realizing the benefits of FPGAs without hardware design expertise.