For Human Action Recognition (HAR) tasks, 3D Convolutional Neural Networks (3D CNNs) have proven to be highly effective, achieving state-of-the-art results. This study introduces a novel streaming-architecture-based toolflow for mapping such models onto FPGAs, taking into account both the model's inherent characteristics and the features of the targeted FPGA device. The HARFLOW3D toolflow takes as input a 3D CNN in ONNX format and a description of the FPGA characteristics, and generates a design that minimizes the latency of the computation. The toolflow comprises several parts: i) a 3D CNN parser, ii) a performance and resource model, iii) a scheduling algorithm for executing 3D models on the generated hardware, iv) a resource-aware optimization engine tailored to 3D models, and v) an automated mapping to synthesizable code for FPGAs. The ability of the toolflow to support a broad range of models and devices is demonstrated through experiments on various 3D CNN and FPGA system pairs. Furthermore, the toolflow has produced high-performing results for 3D CNN models that have not been mapped to FPGAs before, demonstrating the potential of FPGA-based systems in this space. Overall, HARFLOW3D delivers competitive latency compared to a range of state-of-the-art hand-tuned approaches, achieving up to 5$\times$ better performance than some of the existing works.
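As a minimal sketch of the kind of information the first component (the 3D CNN parser) would extract, the snippet below uses the standard `onnx` Python package to load a model and collect per-layer convolution attributes. This is an illustrative assumption, not HARFLOW3D's actual parser, and the file name `x3d_m.onnx` is hypothetical.

```python
# Illustrative sketch (not HARFLOW3D's actual code): load a 3D CNN exported
# to ONNX and extract convolution layer attributes, the kind of per-layer
# information a parser could feed into performance and resource models.
import onnx

# Hypothetical model file; any 3D CNN exported to ONNX would work here.
model = onnx.load("x3d_m.onnx")

layers = []
for node in model.graph.node:
    if node.op_type == "Conv":
        # Attributes such as kernel_shape, strides, and pads are stored as
        # integer lists on the node's AttributeProto entries.
        attrs = {a.name: list(a.ints) for a in node.attribute if a.ints}
        layers.append({
            "name": node.name,
            "kernel_shape": attrs.get("kernel_shape"),  # e.g. [3, 3, 3] for a 3D conv
            "strides": attrs.get("strides"),
            "pads": attrs.get("pads"),
        })

for layer in layers:
    print(layer)
```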