We study the uniform approximation of echo state networks with randomly generated internal weights. These models, in which only the readout weights are optimized during training, have achieved empirical success in learning dynamical systems. We address the representational capacity of these models by showing that they are universal under weak conditions. Our main result gives a sufficient condition on the activation function and a sampling procedure for the internal weights such that echo state networks can approximate any continuous causal time-invariant operator with high probability. In particular, for the ReLU activation, we quantify the approximation error of echo state networks for sufficiently regular operators.
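The training scheme described above, random fixed internal weights with only a trained linear readout, can be illustrated by a minimal sketch. All hyperparameters (reservoir size, spectral radius, tanh activation, ridge regularization, the delayed-sine toy task) are illustrative assumptions, not taken from the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, n_in = 100, 1  # assumed reservoir and input dimensions

# Random internal (reservoir) and input weights; these are never trained.
W = rng.uniform(-1, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
W_in = rng.uniform(-1, 1, (n_res, n_in))

def run_reservoir(inputs):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))  # tanh assumed for the sketch
        states.append(x.copy())
    return np.array(states)

# Toy causal time-invariant target: the input delayed by two steps.
T = 500
u = np.sin(0.1 * np.arange(T))
y = np.roll(u, 2)
X = run_reservoir(u)

washout = 50  # discard the initial transient before fitting
A, b = X[washout:], y[washout:]
# Ridge-regression readout: the only trained parameters.
w_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ b)

pred = X @ w_out
err = np.sqrt(np.mean((pred[washout:] - y[washout:]) ** 2))
```

Because the reservoir state retains a fading memory of past inputs, the linear readout alone suffices to recover the delayed signal with small error on this toy task.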