In this paper it is shown that $C_\beta$-smooth functions can be approximated by neural networks with parameters in $\{0,\pm \frac{1}{2}, \pm 1, 2\}$. The depth, width and the number of active parameters of the constructed networks have, up to a logarithmic factor, the same dependence on the approximation error as networks with parameters in $[-1,1]$. In particular, this means that nonparametric regression estimation with the constructed networks attains the same convergence rate as with sparse networks with parameters in $[-1,1]$.