In this paper it is shown that $C_\beta$-smooth functions can be approximated by neural networks with parameters in $\{0,\pm \frac{1}{2}, \pm 1, 2\}$. The depth, width, and number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.
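To illustrate the restricted parameter set, the sketch below projects real-valued weights of a small ReLU network onto $\{0,\pm \frac{1}{2}, \pm 1, 2\}$ by nearest-value rounding. This quantization rule and the network architecture are assumptions for illustration only; they are not the paper's construction, which builds the networks directly.

```python
import numpy as np

# The discrete parameter set from the abstract: {0, +-1/2, +-1, 2}.
PARAM_SET = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 2.0])

def quantize(w):
    """Map each entry of w to the nearest value in PARAM_SET.

    Nearest-value rounding is a hypothetical stand-in, not the
    paper's approximation scheme.
    """
    w = np.asarray(w, dtype=float)
    idx = np.abs(w[..., None] - PARAM_SET).argmin(axis=-1)
    return PARAM_SET[idx]

def relu_net(x, weights, biases):
    """Evaluate a fully connected ReLU network on input x."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(W @ h + b, 0.0)
    return weights[-1] @ h + biases[-1]

# A toy two-layer network with real-valued weights, then its
# quantized counterpart with all parameters in PARAM_SET.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 1)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

x = np.array([0.3])
y_full = relu_net(x, [W1, W2], [b1, b2])
y_quant = relu_net(x, [quantize(W1), quantize(W2)],
                   [quantize(b1), quantize(b2)])
print(y_full, y_quant)
```

Every parameter of the quantized network lies in the six-element set, so it can be stored with a few bits per weight, while depth and width are unchanged.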