In this paper, we consider Barron functions $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma > 0$, which are functions that can be written as \[ f(x) = \int_{\mathbb{R}^d} F(\xi) \, e^{2 \pi i \langle x, \xi \rangle} \, d \xi \quad \text{with} \quad \int_{\mathbb{R}^d} |F(\xi)| \cdot (1 + |\xi|)^{\sigma} \, d \xi < \infty. \] For $\sigma = 1$, these functions play a prominent role in machine learning, since they can be efficiently approximated by (shallow) neural networks without suffering from the curse of dimensionality. For these functions, we study the following question: Given $m$ point samples $f(x_1),\dots,f(x_m)$ of an unknown Barron function $f : [0,1]^d \to \mathbb{R}$ of smoothness $\sigma$, how well can $f$ be recovered from these samples, for an optimal choice of the sampling points and the reconstruction procedure? Denoting the optimal reconstruction error measured in $L^p$ by $s_m (\sigma; L^p)$, we show that \[ m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} \lesssim s_m(\sigma;L^p) \lesssim (\ln (e + m))^{\alpha(\sigma,d) / p} \cdot m^{- \frac{1}{\max \{ p,2 \}} - \frac{\sigma}{d}} , \] where the implied constants only depend on $\sigma$ and $d$ and where $\alpha(\sigma,d)$ stays bounded as $d \to \infty$.
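As a simple illustration (not taken from the paper, but a standard example), the Gaussian satisfies the Barron condition above for every smoothness level $\sigma > 0$:

\[
f(x) = e^{-\pi |x|^2}, \qquad F(\xi) = e^{-\pi |\xi|^2}, \qquad
\int_{\mathbb{R}^d} e^{-\pi |\xi|^2} \, (1 + |\xi|)^{\sigma} \, d\xi < \infty
\quad \text{for all } \sigma > 0,
\]

since the Fourier transform of the Gaussian (with the convention $e^{2\pi i \langle x, \xi \rangle}$ used above) is again a Gaussian, and the polynomial weight $(1+|\xi|)^{\sigma}$ cannot overcome its exponential decay.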