Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions. These ridge functions are defined via an activation function and customizable weights. The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness. The main focus, however, is to prove that the bounds are best possible. To this end, counterexamples are constructed with a non-linear, quantitative extension of the uniform boundedness principle. They show sharpness with respect to Lipschitz classes for the logistic activation function and for certain piecewise polynomial activation functions. The paper is based on univariate results in (Goebbels, St.: On sharpness of error bounds for univariate approximation by single hidden layer feedforward neural networks. Results Math 75 (3), 2020, article 109, https://rdcu.be/b5mKH).
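For orientation, the network output in this setting can be written as a sum of ridge functions. The display below is a sketch in our own notation (the symbols g, n, a_k, b_k, c_k, d are not taken from the paper) of the standard single hidden layer model:

\[
  g(x) \;=\; \sum_{k=1}^{n} c_k\, \sigma(a_k \cdot x + b_k),
  \qquad x \in \mathbb{R}^d,\; a_k \in \mathbb{R}^d,\; b_k, c_k \in \mathbb{R},
\]

where \(\sigma\) is the activation function, for instance the logistic function \(\sigma(t) = 1/(1+e^{-t})\). Each summand depends on \(x\) only through the inner product \(a_k \cdot x\), which is what makes it a ridge function; the weights \(a_k, b_k, c_k\) are the customizable parameters mentioned above.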