One of the fundamental problems in deep learning theory is understanding the approximation and generalization properties of two-layer neural networks in high dimensions. To tackle this issue, researchers have introduced the Barron space $\mathcal{B}_s(\Omega)$ and the spectral Barron space $\mathcal{F}_s(\Omega)$, where the index $s$ characterizes the smoothness of functions within these spaces and $\Omega\subset\mathbb{R}^d$ denotes the input domain. However, the relationship between these two types of Barron spaces remains unclear. In this paper, we establish continuous embeddings between these spaces, as implied by the following inequality: for any $\delta\in (0,1)$, $s\in \mathbb{N}^{+}$, and $f: \Omega \to \mathbb{R}$, it holds that \[ \delta\gamma^{\delta-s}_{\Omega}\|f\|_{\mathcal{F}_{s-\delta}(\Omega)}\lesssim_s \|f\|_{\mathcal{B}_s(\Omega)}\lesssim_s \|f\|_{\mathcal{F}_{s+1}(\Omega)}, \] where $\gamma_{\Omega}=\sup_{\|v\|_2=1,\,x\in\Omega}|v^Tx|$; notably, the hidden constants depend solely on $s$. Furthermore, we provide examples to demonstrate that the lower bound is tight.
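As an illustration of how the prefactor behaves (a special case spelled out here for concreteness, not an additional result of the paper): when $\Omega$ is the closed Euclidean unit ball $\{x\in\mathbb{R}^d:\|x\|_2\le 1\}$, one has $\gamma_{\Omega}=\sup_{\|v\|_2=1,\,\|x\|_2\le 1}|v^Tx|=1$, so $\gamma_{\Omega}^{\delta-s}=1$ and the lower bound reduces to \[ \delta\,\|f\|_{\mathcal{F}_{s-\delta}(\Omega)}\lesssim_s \|f\|_{\mathcal{B}_s(\Omega)}, \] making explicit that the embedding constant degenerates linearly as $\delta\to 0$, i.e., as the spectral smoothness index $s-\delta$ approaches $s$.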