In the next decade, the demands for computing in large scientific experiments are expected to grow tremendously. During the same time period, CPU performance increases will be limited. At the CERN Large Hadron Collider (LHC), these two issues will confront one another as the collider is upgraded for high luminosity running. Alternative processors such as graphics processing units (GPUs) can resolve this confrontation provided that algorithms can be sufficiently accelerated. In many cases, algorithmic speedups are found to be largest through the adoption of deep learning algorithms. We present a comprehensive exploration of the use of GPU-based hardware acceleration for deep learning inference within the data reconstruction workflow of high energy physics. We present several realistic examples and discuss a strategy for the seamless integration of coprocessors so that the LHC can maintain, if not exceed, its current performance throughout its running.
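While the abstract only outlines the strategy, the coprocessor-as-a-service pattern it refers to can be illustrated with a short client-side sketch. The example below assumes an NVIDIA Triton-style GPU inference server reachable over gRPC; the server address, model name, and tensor names are placeholders chosen for illustration, not details taken from the paper.

```python
# Minimal sketch: a CPU-bound reconstruction job offloads deep learning
# inference to a remote GPU coprocessor exposed "as a service".
# Assumption: an NVIDIA Triton-style inference server; URL, model name,
# and tensor names below are hypothetical placeholders.
import numpy as np
import tritonclient.grpc as grpcclient

# Connect to the remote GPU inference server (hypothetical address).
client = grpcclient.InferenceServerClient(url="gpu-server.example.org:8001")

# One batch of input features, standing in for real event data
# produced by the reconstruction workflow.
batch = np.random.rand(16, 100).astype(np.float32)
inp = grpcclient.InferInput("INPUT__0", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)
out = grpcclient.InferRequestedOutput("OUTPUT__0")

# The CPU hands the request to the coprocessor and receives only the
# result, so many reconstruction jobs can share a small pool of GPUs.
result = client.infer(model_name="placeholder_model", inputs=[inp], outputs=[out])
scores = result.as_numpy("OUTPUT__0")
print(scores.shape)
```

Because the client only serializes tensors and waits for the response, the reconstruction code is decoupled from the accelerator: the same call pattern works whether the model is served by a GPU, another coprocessor, or a CPU fallback, which is the kind of seamless integration the abstract describes.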