Given $n$ samples of a function $f\colon D\to\mathbb C$ at random points drawn with respect to a measure $\varrho_S$, we develop a theoretical analysis of the $L_2(D, \varrho_T)$-approximation error. For a particular choice of $\varrho_S$ depending on $\varrho_T$, it is known that the weighted least squares method on finite-dimensional function spaces $V_m$, $\dim(V_m) = m < \infty$, achieves the same error as the best approximation in $V_m$ up to a multiplicative constant when given exact samples with logarithmic oversampling. If the source measure $\varrho_S$ and the target measure $\varrho_T$ differ, we are in the domain adaptation setting, a subfield of transfer learning. We model the resulting deterioration of the error in our bounds. Further, for noisy samples, our bounds describe the bias-variance trade-off depending on the dimension $m$ of the approximation space $V_m$. All results hold with high probability. For demonstration, we consider functions defined on the $d$-dimensional cube given in uniform random samples. We analyze polynomials, the half-period cosine, and a bounded orthonormal basis of the non-periodic Sobolev space $H_{\mathrm{mix}}^2$. Overcoming numerical issues of this $H_{\mathrm{mix}}^2$ basis, we obtain a novel stable approximation method with quadratic error decay. Numerical experiments indicate the applicability of our results.
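The weighted least squares scheme described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the target function, the monomial basis for $V_m$, and the sample sizes are assumptions for demonstration; with $\varrho_S = \varrho_T$ uniform on $[0,1]$, the weights reduce to $w \equiv 1$.

```python
import numpy as np

# Hedged sketch of weighted least squares approximation from random samples.
# f, n, m, and the monomial basis are illustrative choices, not from the paper.
rng = np.random.default_rng(0)

f = lambda x: np.cos(2 * np.pi * x) + x**2   # example target function on D = [0, 1]
n, m = 200, 8                                 # n samples, dim(V_m) = m

x = rng.uniform(0.0, 1.0, n)                  # points drawn from the source measure (uniform)
y = f(x)                                      # exact (noiseless) samples

# Basis of V_m: monomials 1, x, ..., x^{m-1}; row i of A holds the basis at x_i
A = np.vander(x, m, increasing=True)

# Weights w(x_i); here source and target measure coincide, so w = 1
w = np.ones(n)
sw = np.sqrt(w)

# Solve min_c sum_i w_i |sum_j c_j x_i^j - y_i|^2
c, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)

# Monte Carlo estimate of the L2(rho_T) error of the approximant
xt = rng.uniform(0.0, 1.0, 10_000)
err = np.sqrt(np.mean((np.vander(xt, m, increasing=True) @ c - f(xt)) ** 2))
print(f"estimated L2 error: {err:.2e}")
```

With logarithmic oversampling ($n \gtrsim m \log m$), the computed error stays within a constant factor of the best approximation error from $V_m$, in line with the guarantees discussed above.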