We study the approximation capacity of some variation spaces corresponding to shallow ReLU$^k$ neural networks. We show that sufficiently smooth functions belong to these spaces with finite variation norms, and for functions with less smoothness we establish approximation rates in terms of the variation norm. Using these results, we prove optimal approximation rates, in terms of the number of neurons, for shallow ReLU$^k$ neural networks. We also show how these results can be used to derive approximation bounds for deep neural networks and convolutional neural networks (CNNs). As applications, we study convergence rates for nonparametric regression with three ReLU neural network models: shallow neural networks, over-parameterized neural networks, and CNNs. In particular, we show that shallow neural networks achieve the minimax optimal rates for learning H\"older functions, which complements recent results for deep neural networks. We also prove that over-parameterized (deep or shallow) neural networks achieve nearly optimal rates for nonparametric regression.
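For reference, a shallow ReLU$^k$ network with $N$ neurons takes the standard form (the notation here is illustrative and may differ from that of the main text):
\[
f_N(x) = \sum_{i=1}^{N} a_i\, \sigma_k(w_i \cdot x + b_i), \qquad \sigma_k(t) = \max(0,t)^k,
\]
where $a_i, b_i \in \mathbb{R}$ and $w_i \in \mathbb{R}^d$, and the case $k=1$ recovers the usual ReLU activation. The classical minimax rate for estimating an $\alpha$-H\"older regression function of $d$ variables from $n$ samples is of order $n^{-2\alpha/(2\alpha+d)}$ for the squared $L^2$ error, which is presumably the benchmark meant by ``minimax optimal'' above.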