We investigate properties of neural networks that use both ReLU and $x^2$ as activation functions. Building on previous results, we show that such networks of constant depth can approximate both analytic functions and functions in Sobolev spaces to arbitrary accuracy, achieving approximation rates of optimal order among all nonlinear approximators, including standard ReLU networks. We then show how, in some settings, low local dimensionality can be exploited to overcome the curse of dimensionality, yielding approximation rates that are optimal for unknown lower-dimensional subspaces.
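For concreteness, one illustrative way to write such a network (the notation here is ours, not drawn from the paper) is as a constant-depth composition in which each hidden coordinate applies either the ReLU or the squaring activation:
\[
  f(x) \;=\; W_L\,\sigma_{L-1}\bigl(\,\cdots\,\sigma_1(W_1 x + b_1)\,\cdots\,\bigr) + b_L,
  \qquad \sigma_\ell \in \{\, t \mapsto \max(t,0),\ t \mapsto t^2 \,\} \ \text{applied coordinatewise},
\]
with the depth $L$ held fixed. One standard reason the squaring activation helps is the polarization identity $xy = \tfrac{1}{2}\bigl((x+y)^2 - x^2 - y^2\bigr)$, which lets a single layer of $x^2$ units compute exact products and thus serves as a building block for polynomial, and hence analytic, approximation.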