We study lower bounds on the worst-case error of numerical integration in tensor product spaces. As reference we use the $N$-th minimal error of linear rules that use $N$ function values. The information complexity is the minimal number $N$ of function evaluations needed to make the $N$-th minimal error smaller than $\varepsilon$ times the initial error. We are interested in the extent to which the information complexity depends on the number $d$ of variables of the integrands. If the information complexity grows exponentially fast in $d$, the integration problem is said to suffer from the curse of dimensionality. Under the assumption that a worst-case function exists for the univariate problem, we present two methods for obtaining good lower bounds on the information complexity. The first method is based on a suitable decomposition of the worst-case function; it can be seen as a generalization of the method of decomposable reproducing kernels, which is often applied successfully when integration in Hilbert spaces with a reproducing kernel is studied. The second method, although applicable only to positive quadrature rules, has the advantage that it does not require a suitable decomposition of the worst-case function. Rather, it is based on a spline approximation of the worst-case function and can be used for analytic functions. The methods presented can be applied to problems beyond the Hilbert space setting. For demonstration purposes we apply them to several examples, notably to uniform integration over the unit cube, weighted integration over the whole space, and integration of infinitely smooth functions over the cube. Some of these results have interesting consequences in discrepancy theory.
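For concreteness, the standard definitions alluded to above can be sketched as follows; the notation $e(N,d)$ for the $N$-th minimal worst-case error in dimension $d$, with $e(0,d)$ denoting the initial error, is introduced here for illustration and is not fixed by the abstract. The information complexity is
\[
  n(\varepsilon, d) = \min \bigl\{ N \in \mathbb{N}_0 : e(N, d) \le \varepsilon \, e(0, d) \bigr\},
\]
and the problem is said to suffer from the curse of dimensionality if there exist constants $c, \varepsilon_0, \gamma > 0$ such that
\[
  n(\varepsilon, d) \ge c \, (1 + \gamma)^d
  \quad \text{for all } \varepsilon \le \varepsilon_0 \text{ and infinitely many } d.
\]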