The need to approximate functions is ubiquitous in science, whether due to empirical constraints or to the high computational cost of accessing the function. In high-energy physics, the precise computation of the scattering cross-section of a process requires the evaluation of computationally intensive integrals. A wide variety of machine learning methods have been used to tackle this problem, but the motivation for using one method over another is often lacking. Comparing these methods is typically highly dependent on the problem at hand, so we specialize to the case where the function can be evaluated a large number of times up front, after which quick and accurate evaluation is required. We consider four interpolation and three machine learning techniques and compare their performance on three toy functions, the four-point scalar Passarino-Veltman $D_0$ function, and the two-loop self-energy master integral $M$. We find that in low dimensions ($d = 3$) traditional interpolation techniques such as the Radial Basis Function perform very well, but in higher dimensions ($d = 5, 6, 9$) multi-layer perceptrons (a.k.a. neural networks) suffer less from the curse of dimensionality and provide the fastest and most accurate predictions.
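To make the workflow concrete, the following is a minimal sketch, not taken from the paper: sample an inexpensive stand-in for the target function many times, fit both a radial basis function interpolant and a multi-layer perceptron, and compare their accuracy on unseen points. The toy function, sample sizes, and hyperparameters are illustrative assumptions, and scipy's RBFInterpolator and scikit-learn's MLPRegressor stand in for the implementations actually compared in the paper.
\begin{verbatim}
# Illustrative sketch only: an RBF interpolant vs. a multi-layer perceptron
# as surrogates for a hypothetical smooth 3-dimensional toy function (the
# paper's toy functions, the D0 function, and the master integral M are not
# reproduced here).
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_function(x):
    # Stand-in d = 3 test function (an assumption for this sketch).
    return np.sin(np.pi * x[:, 0]) * np.exp(-x[:, 1] ** 2) + 0.5 * x[:, 2]

# Evaluate the (pretend-expensive) function a large number of times up front...
X_train = rng.uniform(-1.0, 1.0, size=(2000, 3))
y_train = toy_function(X_train)

# ...then build fast surrogates for later evaluation.
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

# Compare the surrogates' accuracy on unseen points.
X_test = rng.uniform(-1.0, 1.0, size=(1000, 3))
y_test = toy_function(X_test)
print("RBF mean |error|:", np.abs(rbf(X_test) - y_test).mean())
print("MLP mean |error|:", np.abs(mlp.predict(X_test) - y_test).mean())
\end{verbatim}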