Recurrent neural networks have been shown to be effective architectures for many tasks in high energy physics, and thus have been widely adopted. Their use in low-latency environments has, however, been limited as a result of the difficulties of implementing recurrent architectures on field-programmable gate arrays (FPGAs). In this paper we present an implementation of two types of recurrent neural network layers -- long short-term memory and gated recurrent unit -- within the hls4ml framework. We demonstrate that our implementation is capable of producing effective designs for both small and large models, and can be customized to meet specific design requirements for inference latencies and FPGA resources. We show the performance and synthesized designs for multiple neural networks, many of which are trained specifically for jet identification tasks at the CERN Large Hadron Collider.
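As context for the two layer types named above, here is a minimal NumPy sketch of a single LSTM and GRU time step. These are the standard textbook formulations of the cells, not the hls4ml HLS implementation itself; all function names and the gate ordering are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Logistic activation used by the LSTM/GRU gates."""
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.
    W: (4H, X) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    with gate blocks stacked in the order i, f, g, o (an illustrative choice)."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:4 * H])  # output gate
    c_new = f * c + i * g        # cell state carries long-term memory
    h_new = o * np.tanh(c_new)   # hidden state is the layer output
    return h_new, c_new

def gru_step(x, h, W, U, b):
    """One GRU time step.
    W: (3H, X), U: (3H, H), b: (3H,), gate blocks ordered z, r, n.
    Conventions differ between libraries on whether z or (1 - z) scales
    the old state; this sketch uses h_new = (1 - z) * h + z * n."""
    H = h.shape[0]
    zi = W @ x + b               # input contribution to all three gates
    zh = U @ h                   # recurrent contribution
    z = sigmoid(zi[0:H] + zh[0:H])          # update gate
    r = sigmoid(zi[H:2 * H] + zh[H:2 * H])  # reset gate
    n = np.tanh(zi[2 * H:3 * H] + r * zh[2 * H:3 * H])  # candidate state
    return (1.0 - z) * h + z * n
```

The recurrent loop over a sequence, with its per-step data dependence on `h` (and `c` for the LSTM), is what makes these layers harder to pipeline on an FPGA than a feed-forward layer.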