The Gaussian kernel plays a central role in machine learning, uncertainty quantification, and scattered data approximation, but has received relatively little attention from a numerical analysis standpoint. The basic problem of finding an algorithm for efficient numerical integration of functions reproduced by Gaussian kernels has not been fully solved. In this article we construct two classes of algorithms that use $N$ evaluations to integrate $d$-variate functions reproduced by Gaussian kernels and prove the exponential or super-algebraic decay of their worst-case errors. In contrast to earlier work, no constraints are placed on the length-scale parameter of the Gaussian kernel. The first class of algorithms is obtained via an appropriate scaling of the classical Gauss–Hermite rules. For these algorithms we derive lower and upper bounds on the worst-case error of the forms $\exp(-c_1 N^{1/d}) N^{1/(4d)}$ and $\exp(-c_2 N^{1/d}) N^{-1/(4d)}$, respectively, for positive constants $c_1 > c_2$. The second class of algorithms we construct is more flexible and uses worst-case optimal weights for points that may be taken as a nested sequence. For these algorithms we derive upper bounds of the form $\exp(-c_3 N^{1/(2d)})$ for a positive constant $c_3$.
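Both constructions can be sketched in one dimension. This is a minimal illustration only: the scaling parameter `alpha`, the point set, and the length-scale `ell` below are illustrative choices, not the specific scalings or nested point sequences analyzed in the paper.

```python
import numpy as np

def scaled_gauss_hermite(f, n, alpha):
    """First class (sketch): an n-point Gauss-Hermite rule whose nodes are
    scaled by alpha. Approximates the Gaussian-weighted integral
    (1/sqrt(pi)) * int f(alpha * x) exp(-x^2) dx; with alpha = sqrt(2)
    this is E[f(Z)] for Z ~ N(0, 1)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n)
    return weights @ f(alpha * nodes) / np.sqrt(np.pi)

def optimal_kernel_weights(points, ell):
    """Second class (sketch): worst-case optimal weights for a fixed point
    set in the RKHS of the Gaussian kernel
    k(x, y) = exp(-(x - y)^2 / (2 ell^2)),
    integrating against the standard normal density. The weights solve
    K w = z, where K is the kernel Gram matrix and z collects the kernel
    mean embedding of N(0, 1) evaluated at the points."""
    x = np.asarray(points, dtype=float)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * ell**2))  # Gram matrix
    # Closed-form mean embedding: int k(x, y) N(y; 0, 1) dy
    z = ell / np.sqrt(ell**2 + 1) * np.exp(-x**2 / (2 * (ell**2 + 1)))
    return np.linalg.solve(K, z)

# Usage: the scaled rule is exact for polynomials of degree < 2n under the
# Gaussian weight, e.g. E[Z^2] = 1 for Z ~ N(0, 1).
approx = scaled_gauss_hermite(lambda x: x**2, 5, np.sqrt(2))

# The optimal-weight rule reproduces kernel translates k(., x_i) exactly
# at its own nodes; the integral is then w @ f(points).
pts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
w = optimal_kernel_weights(pts, ell=1.0)
```

By construction the second rule is worst-case optimal for the given points: any other choice of weights has a larger worst-case error over the unit ball of the Gaussian RKHS.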